Upload folder using huggingface_hub
This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
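For context, a commit with this default message is normally produced by the `huggingface_hub` `upload_folder` API. The sketch below is illustrative only: the target repo id and local folder path are not shown in this diff, and the `repo_type="dataset"` value is an assumption based on the data-style layout of the repository.

```python
# Illustrative sketch only: an upload_folder call like this yields the default
# commit message "Upload folder using huggingface_hub".
from huggingface_hub import HfApi

api = HfApi()  # assumes a Hugging Face token is already configured (e.g. via `huggingface-cli login`)
api.upload_folder(
    folder_path=".",           # local folder holding chroma-db-all_sources/, langchain_md_files/, the .pkl files, ...
    repo_id="user/repo-name",  # placeholder: the actual repo id is not visible in this diff
    repo_type="dataset",       # assumption based on the data-style layout of this repo
)
```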
- all_sources_contextual_nodes.pkl +2 -2
- chroma-db-all_sources/{81a9f2d3-1163-41c5-b154-656c5cd21f7c → c50946fb-91db-4b3a-81a8-507f4e24e0fc}/data_level0.bin +1 -1
- chroma-db-all_sources/{81a9f2d3-1163-41c5-b154-656c5cd21f7c → c50946fb-91db-4b3a-81a8-507f4e24e0fc}/header.bin +1 -1
- chroma-db-all_sources/{81a9f2d3-1163-41c5-b154-656c5cd21f7c → c50946fb-91db-4b3a-81a8-507f4e24e0fc}/index_metadata.pickle +1 -1
- chroma-db-all_sources/{81a9f2d3-1163-41c5-b154-656c5cd21f7c → c50946fb-91db-4b3a-81a8-507f4e24e0fc}/length.bin +1 -1
- chroma-db-all_sources/{81a9f2d3-1163-41c5-b154-656c5cd21f7c → c50946fb-91db-4b3a-81a8-507f4e24e0fc}/link_lists.bin +1 -1
- chroma-db-all_sources/chroma.sqlite3 +2 -2
- chroma-db-all_sources/document_dict_all_sources.pkl +2 -2
- langchain_md_files/_templates/integration.mdx +60 -0
- langchain_md_files/additional_resources/arxiv_references.mdx +1101 -0
- langchain_md_files/additional_resources/dependents.mdx +554 -0
- langchain_md_files/additional_resources/tutorials.mdx +52 -0
- langchain_md_files/additional_resources/youtube.mdx +63 -0
- langchain_md_files/changes/changelog/core.mdx +10 -0
- langchain_md_files/changes/changelog/langchain.mdx +93 -0
- langchain_md_files/concepts/agents.mdx +25 -0
- langchain_md_files/concepts/architecture.mdx +78 -0
- langchain_md_files/concepts/async.mdx +81 -0
- langchain_md_files/concepts/callbacks.mdx +73 -0
- langchain_md_files/concepts/chat_history.mdx +46 -0
- langchain_md_files/concepts/chat_models.mdx +168 -0
- langchain_md_files/concepts/document_loaders.mdx +45 -0
- langchain_md_files/concepts/embedding_models.mdx +130 -0
- langchain_md_files/concepts/evaluation.mdx +17 -0
- langchain_md_files/concepts/example_selectors.mdx +20 -0
- langchain_md_files/concepts/few_shot_prompting.mdx +85 -0
- langchain_md_files/concepts/index.mdx +95 -0
- langchain_md_files/concepts/key_value_stores.mdx +38 -0
- langchain_md_files/concepts/lcel.mdx +221 -0
- langchain_md_files/concepts/messages.mdx +245 -0
- langchain_md_files/concepts/multimodality.mdx +88 -0
- langchain_md_files/concepts/output_parsers.mdx +42 -0
- langchain_md_files/concepts/prompt_templates.mdx +79 -0
- langchain_md_files/concepts/rag.mdx +98 -0
- langchain_md_files/concepts/retrieval.mdx +242 -0
- langchain_md_files/concepts/retrievers.mdx +145 -0
- langchain_md_files/concepts/runnables.mdx +352 -0
- langchain_md_files/concepts/streaming.mdx +191 -0
- langchain_md_files/concepts/structured_outputs.mdx +148 -0
- langchain_md_files/concepts/testing.mdx +81 -0
- langchain_md_files/concepts/text_llms.mdx +10 -0
- langchain_md_files/concepts/text_splitters.mdx +135 -0
- langchain_md_files/concepts/tokens.mdx +58 -0
- langchain_md_files/concepts/tool_calling.mdx +149 -0
- langchain_md_files/concepts/tools.mdx +211 -0
- langchain_md_files/concepts/tracing.mdx +10 -0
- langchain_md_files/concepts/vectorstores.mdx +191 -0
- langchain_md_files/concepts/why_langchain.mdx +109 -0
- langchain_md_files/contributing/how_to/code/guidelines.mdx +35 -0
- langchain_md_files/contributing/how_to/code/index.mdx +6 -0
all_sources_contextual_nodes.pkl
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a3fb01a8d69f1af5b8c9d863a0a40d109f812696a43a6ce2a3b420458be4bc49
+size 112785806
chroma-db-all_sources/{81a9f2d3-1163-41c5-b154-656c5cd21f7c → c50946fb-91db-4b3a-81a8-507f4e24e0fc}/data_level0.bin
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:e8b876ac2e179211d41424ec19f40153fa118545766dee59f9753a73b04f350f
 size 135552000
chroma-db-all_sources/{81a9f2d3-1163-41c5-b154-656c5cd21f7c → c50946fb-91db-4b3a-81a8-507f4e24e0fc}/header.bin
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:595a4f1b655e01205b66b5f692d44a48da44acf9a2ad5155a223d082d235bae3
 size 100
chroma-db-all_sources/{81a9f2d3-1163-41c5-b154-656c5cd21f7c → c50946fb-91db-4b3a-81a8-507f4e24e0fc}/index_metadata.pickle
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ef4d9945fed0d6ed7a0b8a619af49e5c8524c1a61e2a61c5dfd554e6af9ffb3d
 size 1854390
chroma-db-all_sources/{81a9f2d3-1163-41c5-b154-656c5cd21f7c → c50946fb-91db-4b3a-81a8-507f4e24e0fc}/length.bin
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6763f97c8f35167fe68a1f7b8b22a14a3e148614522e9df6ad82ac65c24c4cb0
 size 128000
chroma-db-all_sources/{81a9f2d3-1163-41c5-b154-656c5cd21f7c → c50946fb-91db-4b3a-81a8-507f4e24e0fc}/link_lists.bin
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:fdfa1a217488e83bd57b7d13985ce0f4379f577eec289316b442b3ebe7dbb79f
 size 277872
chroma-db-all_sources/chroma.sqlite3
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:f8bc923548681b7c546e6e45d6dfd796876b67066118cf3fe6b22cfeca1524b2
+size 947904512
chroma-db-all_sources/document_dict_all_sources.pkl
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:4bee37cedea3099ac8b65a74904688d771f624c97097f0c67e65b163b3967b22
+size 87955987
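The binary files above are Git LFS pointers for a persisted Chroma collection plus two pickled lookup objects. A minimal sketch of how such artifacts are typically opened follows; it assumes a standard Chroma persistent directory and ordinary pickle files, since the diff only shows pointer metadata, and the collection names and object types are not guaranteed by this commit.

```python
# Minimal sketch under the assumptions stated above; not part of this commit.
import pickle

import chromadb

# Open the persisted Chroma store that lives in chroma-db-all_sources/.
client = chromadb.PersistentClient(path="chroma-db-all_sources")
print(client.list_collections())  # the collection name(s) are not visible in this diff

# Load the pickled companion objects tracked in the same upload.
# Unpickling may require the libraries that originally produced these objects.
with open("chroma-db-all_sources/document_dict_all_sources.pkl", "rb") as f:
    document_dict = pickle.load(f)

with open("all_sources_contextual_nodes.pkl", "rb") as f:
    contextual_nodes = pickle.load(f)
```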
langchain_md_files/_templates/integration.mdx
ADDED
@@ -0,0 +1,60 @@
[comment: Please, a reference example here "docs/integrations/arxiv.md"]::
[comment: Use this template to create a new .md file in "docs/integrations/"]::

# Title_REPLACE_ME

[comment: Only one Title/H1 is allowed!]::

>
[comment: Description: After reading this description, a reader should decide if this integration is good enough to try/follow reading OR]::
[comment: go to read the next integration doc. ]::
[comment: Description should include a link to the source for follow reading.]::

## Installation and Setup

[comment: Installation and Setup: All necessary additional package installations and setups for Tokens, etc]::

```bash
pip install package_name_REPLACE_ME
```

[comment: OR this text:]::

There isn't any special setup for it.

[comment: The next H2/## sections with names of the integration modules, like "LLM", "Text Embedding Models", etc]::
[comment: see "Modules" in the "index.html" page]::
[comment: Each H2 section should include a link to an example(s) and a Python code with the import of the integration class]::
[comment: Below are several example sections. Remove all unnecessary sections. Add all necessary sections not provided here.]::

## LLM

See a [usage example](/docs/integrations/llms/INCLUDE_REAL_NAME).

```python
from langchain_community.llms import integration_class_REPLACE_ME
```

## Text Embedding Models

See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME).

```python
from langchain_community.embeddings import integration_class_REPLACE_ME
```

## Chat models

See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME).

```python
from langchain_community.chat_models import integration_class_REPLACE_ME
```

## Document Loader

See a [usage example](/docs/integrations/document_loaders/INCLUDE_REAL_NAME).

```python
from langchain_community.document_loaders import integration_class_REPLACE_ME
```
langchain_md_files/additional_resources/arxiv_references.mdx
ADDED
@@ -0,0 +1,1101 @@
# arXiv

LangChain implements the latest research in the field of Natural Language Processing. This page contains `arXiv` papers referenced in the LangChain Documentation, API Reference, Templates, and Cookbooks.

From the opposite direction, scientists use `LangChain` in research and reference it in the research papers.

`arXiv` papers with references to:
[LangChain](https://arxiv.org/search/?query=langchain&searchtype=all&source=header) | [LangGraph](https://arxiv.org/search/?query=langgraph&searchtype=all&source=header) | [LangSmith](https://arxiv.org/search/?query=langsmith&searchtype=all&source=header)

## Summary

| arXiv id / Title | Authors | Published date 🔻 | LangChain Documentation|
|------------------|---------|-------------------|------------------------|
| `2403.14403v2` [Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity](http://arxiv.org/abs/2403.14403v2) | Soyeong Jeong, Jinheon Baek, Sukmin Cho, et al. | 2024‑03‑21 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts)
| `2402.03620v1` [Self-Discover: Large Language Models Self-Compose Reasoning Structures](http://arxiv.org/abs/2402.03620v1) | Pei Zhou, Jay Pujara, Xiang Ren, et al. | 2024‑02‑06 | `Cookbook:` [Self-Discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
| `2402.03367v2` [RAG-Fusion: a New Take on Retrieval-Augmented Generation](http://arxiv.org/abs/2402.03367v2) | Zackary Rackauckas | 2024‑01‑31 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts)
| `2401.18059v1` [RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval](http://arxiv.org/abs/2401.18059v1) | Parth Sarthi, Salman Abdullah, Aditi Tuli, et al. | 2024‑01‑31 | `Cookbook:` [Raptor](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
| `2401.15884v2` [Corrective Retrieval Augmented Generation](http://arxiv.org/abs/2401.15884v2) | Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al. | 2024‑01‑29 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts), `Cookbook:` [Langgraph Crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
| `2401.08500v1` [Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering](http://arxiv.org/abs/2401.08500v1) | Tal Ridnik, Dedy Kredo, Itamar Friedman | 2024‑01‑16 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts)
| `2401.04088v1` [Mixtral of Experts](http://arxiv.org/abs/2401.04088v1) | Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al. | 2024‑01‑08 | `Cookbook:` [Together Ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
| `2312.06648v2` [Dense X Retrieval: What Retrieval Granularity Should We Use?](http://arxiv.org/abs/2312.06648v2) | Tong Chen, Hongwei Wang, Sihao Chen, et al. | 2023‑12‑11 | `Template:` [propositional-retrieval](https://python.langchain.com/docs/templates/propositional-retrieval)
| `2311.09210v1` [Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models](http://arxiv.org/abs/2311.09210v1) | Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al. | 2023‑11‑15 | `Template:` [chain-of-note-wiki](https://python.langchain.com/docs/templates/chain-of-note-wiki)
| `2310.11511v1` [Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection](http://arxiv.org/abs/2310.11511v1) | Akari Asai, Zeqiu Wu, Yizhong Wang, et al. | 2023‑10‑17 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts), `Cookbook:` [Langgraph Self Rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
| `2310.06117v2` [Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models](http://arxiv.org/abs/2310.06117v2) | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. | 2023‑10‑09 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts), `Template:` [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting), `Cookbook:` [Stepback-Qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
| `2307.15337v3` [Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation](http://arxiv.org/abs/2307.15337v3) | Xuefei Ning, Zinan Lin, Zixuan Zhou, et al. | 2023‑07‑28 | `Template:` [skeleton-of-thought](https://python.langchain.com/docs/templates/skeleton-of-thought)
| `2307.09288v2` [Llama 2: Open Foundation and Fine-Tuned Chat Models](http://arxiv.org/abs/2307.09288v2) | Hugo Touvron, Louis Martin, Kevin Stone, et al. | 2023‑07‑18 | `Cookbook:` [Semi Structured Rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
| `2307.03172v3` [Lost in the Middle: How Language Models Use Long Contexts](http://arxiv.org/abs/2307.03172v3) | Nelson F. Liu, Kevin Lin, John Hewitt, et al. | 2023‑07‑06 | `Docs:` [docs/how_to/long_context_reorder](https://python.langchain.com/docs/how_to/long_context_reorder)
| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023‑05‑23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read), `Cookbook:` [Rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023‑05‑15 | `API:` [langchain_experimental.tot](https://python.langchain.com/api_reference/experimental/tot.html), `Cookbook:` [Tree Of Thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
| `2305.04091v3` [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http://arxiv.org/abs/2305.04091v3) | Lei Wang, Wanyu Xu, Yihuai Lan, et al. | 2023‑05‑06 | `Cookbook:` [Plan And Execute Agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
| `2305.02156v1` [Zero-Shot Listwise Document Reranking with a Large Language Model](http://arxiv.org/abs/2305.02156v1) | Xueguang Ma, Xinyu Zhang, Ronak Pradeep, et al. | 2023‑05‑03 | `Docs:` [docs/how_to/contextual_compression](https://python.langchain.com/docs/how_to/contextual_compression), `API:` [langchain...LLMListwiseRerank](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html#)
| `2304.08485v2` [Visual Instruction Tuning](http://arxiv.org/abs/2304.08485v2) | Haotian Liu, Chunyuan Li, Qingyang Wu, et al. | 2023‑04‑17 | `Cookbook:` [Semi Structured Multi Modal Rag Llama2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb), [Semi Structured And Multi Modal Rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb)
| `2304.03442v2` [Generative Agents: Interactive Simulacra of Human Behavior](http://arxiv.org/abs/2304.03442v2) | Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al. | 2023‑04‑07 | `Cookbook:` [Generative Agents Interactive Simulacra Of Human Behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb), [Multiagent Bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb)
| `2303.17760v2` [CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society](http://arxiv.org/abs/2303.17760v2) | Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al. | 2023‑03‑31 | `Cookbook:` [Camel Role Playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023‑03‑30 | `API:` [langchain_experimental.autonomous_agents](https://python.langchain.com/api_reference/experimental/autonomous_agents.html), `Cookbook:` [Hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023‑01‑24 | `API:` [langchain_community...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022‑12‑20 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts), `API:` [langchain...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde), `Cookbook:` [Hypothetical Document Embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
| `2212.08073v1` [Constitutional AI: Harmlessness from AI Feedback](http://arxiv.org/abs/2212.08073v1) | Yuntao Bai, Saurav Kadavath, Sandipan Kundu, et al. | 2022‑12‑15 | `Docs:` [docs/versions/migrating_chains/constitutional_chain](https://python.langchain.com/docs/versions/migrating_chains/constitutional_chain)
| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022‑12‑12 | `API:` [langchain_experimental.fallacy_removal](https://python.langchain.com/api_reference/experimental/fallacy_removal.html)
| `2211.13892v2` [Complementary Explanations for Effective In-Context Learning](http://arxiv.org/abs/2211.13892v2) | Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. | 2022‑11‑25 | `API:` [langchain_core...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022‑11‑18 | `API:` [langchain_experimental.pal_chain](https://python.langchain.com/api_reference/experimental/pal_chain.html), [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), `Cookbook:` [Program Aided Language Model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
| `2210.11934v2` [An Analysis of Fusion Functions for Hybrid Retrieval](http://arxiv.org/abs/2210.11934v2) | Sebastian Bruch, Siyu Gai, Amir Ingber | 2022‑10‑21 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts)
| `2210.03629v3` [ReAct: Synergizing Reasoning and Acting in Language Models](http://arxiv.org/abs/2210.03629v3) | Shunyu Yao, Jeffrey Zhao, Dian Yu, et al. | 2022‑10‑06 | `Docs:` [docs/integrations/tools/ionic_shopping](https://python.langchain.com/docs/integrations/tools/ionic_shopping), [docs/integrations/providers/cohere](https://python.langchain.com/docs/integrations/providers/cohere), [docs/concepts](https://python.langchain.com/docs/concepts), `API:` [langchain...create_react_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.react.agent.create_react_agent.html#langchain.agents.react.agent.create_react_agent), [langchain...TrajectoryEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain)
| `2209.10785v2` [Deep Lake: a Lakehouse for Deep Learning](http://arxiv.org/abs/2209.10785v2) | Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al. | 2022‑09‑22 | `Docs:` [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)
| `2205.13147v4` [Matryoshka Representation Learning](http://arxiv.org/abs/2205.13147v4) | Aditya Kusupati, Gantavya Bhatt, Aniket Rege, et al. | 2022‑05‑26 | `Docs:` [docs/integrations/providers/snowflake](https://python.langchain.com/docs/integrations/providers/snowflake)
| `2205.12654v1` [Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages](http://arxiv.org/abs/2205.12654v1) | Kevin Heffernan, Onur Çelebi, Holger Schwenk | 2022‑05‑25 | `API:` [langchain_community...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022‑03‑15 | `Docs:` [docs/tutorials/sql_qa](https://python.langchain.com/docs/tutorials/sql_qa), `API:` [langchain_community...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022‑02‑01 | `API:` [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2112.01488v3` [ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction](http://arxiv.org/abs/2112.01488v3) | Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, et al. | 2021‑12‑02 | `Docs:` [docs/integrations/retrievers/ragatouille](https://python.langchain.com/docs/integrations/retrievers/ragatouille), [docs/integrations/providers/ragatouille](https://python.langchain.com/docs/integrations/providers/ragatouille), [docs/concepts](https://python.langchain.com/docs/concepts), [docs/integrations/providers/dspy](https://python.langchain.com/docs/integrations/providers/dspy)
| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021‑02‑26 | `API:` [langchain_experimental.open_clip](https://python.langchain.com/api_reference/experimental/open_clip.html)
| `2005.14165v4` [Language Models are Few-Shot Learners](http://arxiv.org/abs/2005.14165v4) | Tom B. Brown, Benjamin Mann, Nick Ryder, et al. | 2020‑05‑28 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts)
| `2005.11401v4` [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](http://arxiv.org/abs/2005.11401v4) | Patrick Lewis, Ethan Perez, Aleksandra Piktus, et al. | 2020‑05‑22 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019‑09‑11 | `API:` [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)

## Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity

- **Authors:** Soyeong Jeong, Jinheon Baek, Sukmin Cho, et al.
- **arXiv id:** [2403.14403v2](http://arxiv.org/abs/2403.14403v2) **Published Date:** 2024-03-21
- **LangChain:**

- **Documentation:** [docs/concepts](https://python.langchain.com/docs/concepts)

**Abstract:** Retrieval-Augmented Large Language Models (LLMs), which incorporate the non-parametric knowledge from external knowledge bases into LLMs, have emerged as a promising approach to enhancing response accuracy in several tasks, such as Question-Answering (QA). However, even though there are various approaches dealing with queries of different complexities, they either handle simple queries with unnecessary computational overhead or fail to adequately address complex multi-step queries; yet, not all user requests fall into only one of the simple or complex categories. In this work, we propose a novel adaptive QA framework, that can dynamically select the most suitable strategy for (retrieval-augmented) LLMs from the simplest to the most sophisticated ones based on the query complexity. Also, this selection process is operationalized with a classifier, which is a smaller LM trained to predict the complexity level of incoming queries with automatically collected labels, obtained from actual predicted outcomes of models and inherent inductive biases in datasets. This approach offers a balanced strategy, seamlessly adapting between the iterative and single-step retrieval-augmented LLMs, as well as the no-retrieval methods, in response to a range of query complexities. We validate our model on a set of open-domain QA datasets, covering multiple query complexities, and show that ours enhances the overall efficiency and accuracy of QA systems, compared to relevant baselines including the adaptive retrieval approaches. Code is available at: https://github.com/starsuzi/Adaptive-RAG.

## Self-Discover: Large Language Models Self-Compose Reasoning Structures

- **Authors:** Pei Zhou, Jay Pujara, Xiang Ren, et al.
- **arXiv id:** [2402.03620v1](http://arxiv.org/abs/2402.03620v1) **Published Date:** 2024-02-06
- **LangChain:**

- **Cookbook:** [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)

**Abstract:** We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods. Core to the framework is a self-discovery process where LLMs select multiple atomic reasoning modules such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER substantially improves GPT-4 and PaLM 2's performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER outperforms inference-intensive methods such as CoT-Self-Consistency by more than 20%, while requiring 10-40x fewer inference compute. Finally, we show that the self-discovered reasoning structures are universally applicable across model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share commonalities with human reasoning patterns.

## RAG-Fusion: a New Take on Retrieval-Augmented Generation

- **Authors:** Zackary Rackauckas
- **arXiv id:** [2402.03367v2](http://arxiv.org/abs/2402.03367v2) **Published Date:** 2024-01-31
- **LangChain:**

- **Documentation:** [docs/concepts](https://python.langchain.com/docs/concepts)

**Abstract:** Infineon has identified a need for engineers, account managers, and customers to rapidly obtain product information. This problem is traditionally addressed with retrieval-augmented generation (RAG) chatbots, but in this study, I evaluated the use of the newly popularized RAG-Fusion method. RAG-Fusion combines RAG and reciprocal rank fusion (RRF) by generating multiple queries, reranking them with reciprocal scores and fusing the documents and scores. Through manually evaluating answers on accuracy, relevance, and comprehensiveness, I found that RAG-Fusion was able to provide accurate and comprehensive answers due to the generated queries contextualizing the original query from various perspectives. However, some answers strayed off topic when the generated queries' relevance to the original query is insufficient. This research marks significant progress in artificial intelligence (AI) and natural language processing (NLP) applications and demonstrates transformations in a global and multi-industry context.

## RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval

- **Authors:** Parth Sarthi, Salman Abdullah, Aditi Tuli, et al.
- **arXiv id:** [2401.18059v1](http://arxiv.org/abs/2401.18059v1) **Published Date:** 2024-01-31
- **LangChain:**

- **Cookbook:** [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)

**Abstract:** Retrieval-augmented language models can better adapt to changes in world state and incorporate long-tail knowledge. However, most existing methods retrieve only short contiguous chunks from a retrieval corpus, limiting holistic understanding of the overall document context. We introduce the novel approach of recursively embedding, clustering, and summarizing chunks of text, constructing a tree with differing levels of summarization from the bottom up. At inference time, our RAPTOR model retrieves from this tree, integrating information across lengthy documents at different levels of abstraction. Controlled experiments show that retrieval with recursive summaries offers significant improvements over traditional retrieval-augmented LMs on several tasks. On question-answering tasks that involve complex, multi-step reasoning, we show state-of-the-art results; for example, by coupling RAPTOR retrieval with the use of GPT-4, we can improve the best performance on the QuALITY benchmark by 20% in absolute accuracy.

## Corrective Retrieval Augmented Generation

- **Authors:** Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al.
- **arXiv id:** [2401.15884v2](http://arxiv.org/abs/2401.15884v2) **Published Date:** 2024-01-29
- **LangChain:**

- **Documentation:** [docs/concepts](https://python.langchain.com/docs/concepts)
- **Cookbook:** [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)

**Abstract:** Large language models (LLMs) inevitably exhibit hallucinations since the accuracy of generated texts cannot be secured solely by the parametric knowledge they encapsulate. Although retrieval-augmented generation (RAG) is a practicable complement to LLMs, it relies heavily on the relevance of retrieved documents, raising concerns about how the model behaves if retrieval goes wrong. To this end, we propose the Corrective Retrieval Augmented Generation (CRAG) to improve the robustness of generation. Specifically, a lightweight retrieval evaluator is designed to assess the overall quality of retrieved documents for a query, returning a confidence degree based on which different knowledge retrieval actions can be triggered. Since retrieval from static and limited corpora can only return sub-optimal documents, large-scale web searches are utilized as an extension for augmenting the retrieval results. Besides, a decompose-then-recompose algorithm is designed for retrieved documents to selectively focus on key information and filter out irrelevant information in them. CRAG is plug-and-play and can be seamlessly coupled with various RAG-based approaches. Experiments on four datasets covering short- and long-form generation tasks show that CRAG can significantly improve the performance of RAG-based approaches.

## Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering

- **Authors:** Tal Ridnik, Dedy Kredo, Itamar Friedman
- **arXiv id:** [2401.08500v1](http://arxiv.org/abs/2401.08500v1) **Published Date:** 2024-01-16
- **LangChain:**

- **Documentation:** [docs/concepts](https://python.langchain.com/docs/concepts)

**Abstract:** Code generation problems differ from common natural language problems - they require matching the exact syntax of the target language, identifying happy paths and edge cases, paying attention to numerous small details in the problem spec, and addressing other code-specific issues and requirements. Hence, many of the optimizations and tricks that have been successful in natural language generation may not be effective for code tasks. In this work, we propose a new approach to code generation by LLMs, which we call AlphaCodium - a test-based, multi-stage, code-oriented iterative flow, that improves the performances of LLMs on code problems. We tested AlphaCodium on a challenging code generation dataset called CodeContests, which includes competitive programming problems from platforms such as Codeforces. The proposed flow consistently and significantly improves results. On the validation set, for example, GPT-4 accuracy (pass@5) increased from 19% with a single well-designed direct prompt to 44% with the AlphaCodium flow. Many of the principles and best practices acquired in this work, we believe, are broadly applicable to general code generation tasks. Full implementation is available at: https://github.com/Codium-ai/AlphaCodium

## Mixtral of Experts

- **Authors:** Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al.
- **arXiv id:** [2401.04088v1](http://arxiv.org/abs/2401.04088v1) **Published Date:** 2024-01-08
- **LangChain:**

- **Cookbook:** [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)

**Abstract:** We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model. Mixtral has the same architecture as Mistral 7B, with the difference that each layer is composed of 8 feedforward blocks (i.e. experts). For every token, at each layer, a router network selects two experts to process the current state and combine their outputs. Even though each token only sees two experts, the selected experts can be different at each timestep. As a result, each token has access to 47B parameters, but only uses 13B active parameters during inference. Mixtral was trained with a context size of 32k tokens and it outperforms or matches Llama 2 70B and GPT-3.5 across all evaluated benchmarks. In particular, Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and multilingual benchmarks. We also provide a model fine-tuned to follow instructions, Mixtral 8x7B - Instruct, that surpasses GPT-3.5 Turbo, Claude-2.1, Gemini Pro, and Llama 2 70B - chat model on human benchmarks. Both the base and instruct models are released under the Apache 2.0 license.

## Dense X Retrieval: What Retrieval Granularity Should We Use?

- **Authors:** Tong Chen, Hongwei Wang, Sihao Chen, et al.
- **arXiv id:** [2312.06648v2](http://arxiv.org/abs/2312.06648v2) **Published Date:** 2023-12-11
- **LangChain:**

- **Template:** [propositional-retrieval](https://python.langchain.com/docs/templates/propositional-retrieval)

**Abstract:** Dense retrieval has become a prominent method to obtain relevant context or world knowledge in open-domain NLP tasks. When we use a learned dense retriever on a retrieval corpus at inference time, an often-overlooked design choice is the retrieval unit in which the corpus is indexed, e.g. document, passage, or sentence. We discover that the retrieval unit choice significantly impacts the performance of both retrieval and downstream tasks. Distinct from the typical approach of using passages or sentences, we introduce a novel retrieval unit, proposition, for dense retrieval. Propositions are defined as atomic expressions within text, each encapsulating a distinct factoid and presented in a concise, self-contained natural language format. We conduct an empirical comparison of different retrieval granularity. Our results reveal that proposition-based retrieval significantly outperforms traditional passage or sentence-based methods in dense retrieval. Moreover, retrieval by proposition also enhances the performance of downstream QA tasks, since the retrieved texts are more condensed with question-relevant information, reducing the need for lengthy input tokens and minimizing the inclusion of extraneous, irrelevant information.

## Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models

- **Authors:** Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al.
- **arXiv id:** [2311.09210v1](http://arxiv.org/abs/2311.09210v1) **Published Date:** 2023-11-15
- **LangChain:**

- **Template:** [chain-of-note-wiki](https://python.langchain.com/docs/templates/chain-of-note-wiki)

**Abstract:** Retrieval-augmented language models (RALMs) represent a substantial advancement in the capabilities of large language models, notably in reducing factual hallucination by leveraging external knowledge sources. However, the reliability of the retrieved information is not always guaranteed. The retrieval of irrelevant data can lead to misguided responses, and potentially causing the model to overlook its inherent knowledge, even when it possesses adequate information to address the query. Moreover, standard RALMs often struggle to assess whether they possess adequate knowledge, both intrinsic and retrieved, to provide an accurate answer. In situations where knowledge is lacking, these systems should ideally respond with "unknown" when the answer is unattainable. In response to these challenges, we introduces Chain-of-Noting (CoN), a novel approach aimed at improving the robustness of RALMs in facing noisy, irrelevant documents and in handling unknown scenarios. The core idea of CoN is to generate sequential reading notes for retrieved documents, enabling a thorough evaluation of their relevance to the given question and integrating this information to formulate the final answer. We employed ChatGPT to create training data for CoN, which was subsequently trained on an LLaMa-2 7B model. Our experiments across four open-domain QA benchmarks show that RALMs equipped with CoN significantly outperform standard RALMs. Notably, CoN achieves an average improvement of +7.9 in EM score given entirely noisy retrieved documents and +10.5 in rejection rates for real-time questions that fall outside the pre-training knowledge scope.

## Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection

- **Authors:** Akari Asai, Zeqiu Wu, Yizhong Wang, et al.
- **arXiv id:** [2310.11511v1](http://arxiv.org/abs/2310.11511v1) **Published Date:** 2023-10-17
- **LangChain:**

- **Documentation:** [docs/concepts](https://python.langchain.com/docs/concepts)
- **Cookbook:** [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)

**Abstract:** Despite their remarkable capabilities, large language models (LLMs) often produce responses containing factual inaccuracies due to their sole reliance on the parametric knowledge they encapsulate. Retrieval-Augmented Generation (RAG), an ad hoc approach that augments LMs with retrieval of relevant knowledge, decreases such issues. However, indiscriminately retrieving and incorporating a fixed number of retrieved passages, regardless of whether retrieval is necessary, or passages are relevant, diminishes LM versatility or can lead to unhelpful response generation. We introduce a new framework called Self-Reflective Retrieval-Augmented Generation (Self-RAG) that enhances an LM's quality and factuality through retrieval and self-reflection. Our framework trains a single arbitrary LM that adaptively retrieves passages on-demand, and generates and reflects on retrieved passages and its own generations using special tokens, called reflection tokens. Generating reflection tokens makes the LM controllable during the inference phase, enabling it to tailor its behavior to diverse task requirements. Experiments show that Self-RAG (7B and 13B parameters) significantly outperforms state-of-the-art LLMs and retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG outperforms ChatGPT and retrieval-augmented Llama2-chat on Open-domain QA, reasoning and fact verification tasks, and it shows significant gains in improving factuality and citation accuracy for long-form generations relative to these models.

## Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models

- **Authors:** Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al.
- **arXiv id:** [2310.06117v2](http://arxiv.org/abs/2310.06117v2) **Published Date:** 2023-10-09
- **LangChain:**

- **Documentation:** [docs/concepts](https://python.langchain.com/docs/concepts)
- **Template:** [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting)
- **Cookbook:** [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)

**Abstract:** We present Step-Back Prompting, a simple prompting technique that enables LLMs to do abstractions to derive high-level concepts and first principles from instances containing specific details. Using the concepts and principles to guide reasoning, LLMs significantly improve their abilities in following a correct reasoning path towards the solution. We conduct experiments of Step-Back Prompting with PaLM-2L, GPT-4 and Llama2-70B models, and observe substantial performance gains on various challenging reasoning-intensive tasks including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back Prompting improves PaLM-2L performance on MMLU (Physics and Chemistry) by 7% and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.

## Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation

- **Authors:** Xuefei Ning, Zinan Lin, Zixuan Zhou, et al.
- **arXiv id:** [2307.15337v3](http://arxiv.org/abs/2307.15337v3) **Published Date:** 2023-07-28
- **LangChain:**

- **Template:** [skeleton-of-thought](https://python.langchain.com/docs/templates/skeleton-of-thought)

**Abstract:** This work aims at decreasing the end-to-end generation latency of large language models (LLMs). One of the major causes of the high generation latency is the sequential decoding approach adopted by almost all state-of-the-art LLMs. In this work, motivated by the thinking and writing process of humans, we propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the skeleton of the answer, and then conducts parallel API calls or batched decoding to complete the contents of each skeleton point in parallel. Not only does SoT provide considerable speed-ups across 12 LLMs, but it can also potentially improve the answer quality on several question categories. SoT is an initial attempt at data-centric optimization for inference efficiency, and showcases the potential of eliciting high-quality answers by explicitly planning the answer structure in language.

## Llama 2: Open Foundation and Fine-Tuned Chat Models

- **Authors:** Hugo Touvron, Louis Martin, Kevin Stone, et al.
- **arXiv id:** [2307.09288v2](http://arxiv.org/abs/2307.09288v2) **Published Date:** 2023-07-18
- **LangChain:**

- **Cookbook:** [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)

**Abstract:** In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.

## Lost in the Middle: How Language Models Use Long Contexts

- **Authors:** Nelson F. Liu, Kevin Lin, John Hewitt, et al.
- **arXiv id:** [2307.03172v3](http://arxiv.org/abs/2307.03172v3) **Published Date:** 2023-07-06
- **LangChain:**

- **Documentation:** [docs/how_to/long_context_reorder](https://python.langchain.com/docs/how_to/long_context_reorder)

**Abstract:** While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.

## Query Rewriting for Retrieval-Augmented Large Language Models

- **Authors:** Xinbei Ma, Yeyun Gong, Pengcheng He, et al.
- **arXiv id:** [2305.14283v3](http://arxiv.org/abs/2305.14283v3) **Published Date:** 2023-05-23
- **LangChain:**

- **Template:** [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read)
- **Cookbook:** [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)

**Abstract:** Large Language Models (LLMs) play powerful, black-box readers in the retrieve-then-read pipeline, making remarkable progress in knowledge-intensive tasks. This work introduces a new framework, Rewrite-Retrieve-Read instead of the previous retrieve-then-read for the retrieval-augmented LLMs from the perspective of the query rewriting. Unlike prior studies focusing on adapting either the retriever or the reader, our approach pays attention to the adaptation of the search query itself, for there is inevitably a gap between the input text and the needed knowledge in retrieval. We first prompt an LLM to generate the query, then use a web search engine to retrieve contexts. Furthermore, to better align the query to the frozen modules, we propose a trainable scheme for our pipeline. A small language model is adopted as a trainable rewriter to cater to the black-box LLM reader. The rewriter is trained using the feedback of the LLM reader by reinforcement learning. Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice QA. Experiments results show consistent performance improvement, indicating that our framework is proven effective and scalable, and brings a new framework for retrieval-augmented LLM.

430 |
+
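The `rewrite-retrieve-read` template and cookbook linked above implement this pipeline with LCEL. A hedged sketch of the same pattern, where the prompt wording, model name, and `retriever` are illustrative assumptions rather than the template's exact code:

```python
# Rewrite-Retrieve-Read sketch: rewrite the question into a better search
# query, retrieve with the rewritten query, then answer from the context.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

rewrite_prompt = ChatPromptTemplate.from_template(
    "Provide a better search query for a web search engine to answer the "
    "given question.\nQuestion: {question}\nQuery:"
)
answer_prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n"
    "{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

rewriter = rewrite_prompt | llm | StrOutputParser()

chain = (
    {
        "context": rewriter | retriever | format_docs,  # `retriever` is assumed to exist
        "question": lambda x: x["question"],
    }
    | answer_prompt
    | llm
    | StrOutputParser()
)
# chain.invoke({"question": "what is LangChain used for?"})
```
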
## Large Language Model Guided Tree-of-Thought

- **Authors:** Jieyi Long
- **arXiv id:** [2305.08291v1](http://arxiv.org/abs/2305.08291v1) **Published Date:** 2023-05-15
- **LangChain:**

  - **API Reference:** [langchain_experimental.tot](https://python.langchain.com/api_reference/experimental/tot.html)
  - **Cookbook:** [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)

**Abstract:** In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel approach aimed at improving the problem-solving capabilities of auto-regressive large language models (LLMs). The ToT technique is inspired by the human mind's approach for solving complex reasoning tasks through trial and error. In this process, the human mind explores the solution space through a tree-like thought process, allowing for backtracking when necessary. To implement ToT as a software system, we augment an LLM with additional modules including a prompter agent, a checker module, a memory module, and a ToT controller. In order to solve a given problem, these modules engage in a multi-round conversation with the LLM. The memory module records the conversation and state history of the problem solving process, which allows the system to backtrack to the previous steps of the thought-process and explore other directions from there. To verify the effectiveness of the proposed technique, we implemented a ToT-based solver for the Sudoku Puzzle. Experimental results show that the ToT framework can significantly increase the success rate of Sudoku puzzle solving. Our implementation of the ToT-based Sudoku solver is available on [GitHub](https://github.com/jieyilong/tree-of-thought-puzzle-solver).

## Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models

- **Authors:** Lei Wang, Wanyu Xu, Yihuai Lan, et al.
- **arXiv id:** [2305.04091v3](http://arxiv.org/abs/2305.04091v3) **Published Date:** 2023-05-06
- **LangChain:**

  - **Cookbook:** [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)

**Abstract:** Large language models (LLMs) have recently been shown to deliver impressive performance in various NLP tasks. To tackle multi-step reasoning tasks, few-shot chain-of-thought (CoT) prompting includes a few manually crafted step-by-step reasoning demonstrations which enable LLMs to explicitly generate reasoning steps and improve their reasoning task accuracy. To eliminate the manual effort, Zero-shot-CoT concatenates the target problem statement with "Let's think step by step" as an input prompt to LLMs. Despite the success of Zero-shot-CoT, it still suffers from three pitfalls: calculation errors, missing-step errors, and semantic misunderstanding errors. To address the missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of two components: first, devising a plan to divide the entire task into smaller subtasks, and then carrying out the subtasks according to the plan. To address the calculation errors and improve the quality of generated reasoning steps, we extend PS prompting with more detailed instructions and derive PS+ prompting. We evaluate our proposed prompting strategy on ten datasets across three reasoning problems. The experimental results over GPT-3 show that our proposed zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought Prompting, and has comparable performance with 8-shot CoT prompting on the math reasoning problem. The code can be found at https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.

## Zero-Shot Listwise Document Reranking with a Large Language Model

- **Authors:** Xueguang Ma, Xinyu Zhang, Ronak Pradeep, et al.
- **arXiv id:** [2305.02156v1](http://arxiv.org/abs/2305.02156v1) **Published Date:** 2023-05-03
- **LangChain:**

  - **Documentation:** [docs/how_to/contextual_compression](https://python.langchain.com/docs/how_to/contextual_compression)
  - **API Reference:** [langchain...LLMListwiseRerank](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html#)

**Abstract:** Supervised ranking methods based on bi-encoder or cross-encoder architectures have shown success in multi-stage text ranking tasks, but they require large amounts of relevance judgments as training data. In this work, we propose Listwise Reranker with a Large Language Model (LRL), which achieves strong reranking effectiveness without using any task-specific training data. Different from the existing pointwise ranking methods, where documents are scored independently and ranked according to the scores, LRL directly generates a reordered list of document identifiers given the candidate documents. Experiments on three TREC web search datasets demonstrate that LRL not only outperforms zero-shot pointwise methods when reranking first-stage retrieval results, but can also act as a final-stage reranker to improve the top-ranked results of a pointwise method for improved efficiency. Additionally, we apply our approach to subsets of MIRACL, a recent multilingual retrieval dataset, with results showing its potential to generalize across different languages.

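LangChain's `LLMListwiseRerank` document compressor (linked above) applies this zero-shot listwise idea inside a contextual-compression retriever. A hedged sketch, assuming an existing `base_retriever` and an OpenAI chat model:

```python
# Listwise LLM reranking sketch: let an LLM reorder/filter first-stage
# retrieval results before they reach the generator.
# `base_retriever` is an assumed, pre-built LangChain retriever.
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMListwiseRerank
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
reranker = LLMListwiseRerank.from_llm(llm, top_n=3)

retriever = ContextualCompressionRetriever(
    base_compressor=reranker,
    base_retriever=base_retriever,
)
docs = retriever.invoke("What does the paper say about zero-shot reranking?")
```
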
## Visual Instruction Tuning

- **Authors:** Haotian Liu, Chunyuan Li, Qingyang Wu, et al.
- **arXiv id:** [2304.08485v2](http://arxiv.org/abs/2304.08485v2) **Published Date:** 2023-04-17
- **LangChain:**

  - **Cookbook:** [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb), [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb)

**Abstract:** Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding. Our early experiments show that LLaVA demonstrates impressive multimodal chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make GPT-4 generated visual instruction tuning data, our model and code base publicly available.

## Generative Agents: Interactive Simulacra of Human Behavior

- **Authors:** Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al.
- **arXiv id:** [2304.03442v2](http://arxiv.org/abs/2304.03442v2) **Published Date:** 2023-04-07
- **LangChain:**

  - **Cookbook:** [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb), [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb)

**Abstract:** Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents--computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent's experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture--observation, planning, and reflection--each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.

## CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society

- **Authors:** Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al.
- **arXiv id:** [2303.17760v2](http://arxiv.org/abs/2303.17760v2) **Published Date:** 2023-03-31
- **LangChain:**

  - **Cookbook:** [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)

**Abstract:** The rapid advancement of chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be challenging and time-consuming. This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents, and provides insight into their "cognitive" processes. To address the challenges of achieving autonomous cooperation, we propose a novel communicative agent framework named role-playing. Our approach involves using inception prompting to guide chat agents toward task completion while maintaining consistency with human intentions. We showcase how role-playing can be used to generate conversational data for studying the behaviors and capabilities of a society of agents, providing a valuable resource for investigating conversational language models. In particular, we conduct comprehensive studies on instruction-following cooperation in multi-agent settings. Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems, and open-sourcing our library to support research on communicative agents and beyond: https://github.com/camel-ai/camel.

## HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face

- **Authors:** Yongliang Shen, Kaitao Song, Xu Tan, et al.
- **arXiv id:** [2303.17580v4](http://arxiv.org/abs/2303.17580v4) **Published Date:** 2023-03-30
- **LangChain:**

  - **API Reference:** [langchain_experimental.autonomous_agents](https://python.langchain.com/api_reference/experimental/autonomous_agents.html)
  - **Cookbook:** [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)

**Abstract:** Solving complicated AI tasks with different domains and modalities is a key step toward artificial general intelligence. While there are numerous AI models available for various domains and modalities, they cannot handle complicated AI tasks autonomously. Considering large language models (LLMs) have exhibited exceptional abilities in language understanding, generation, interaction, and reasoning, we advocate that LLMs could act as a controller to manage existing AI models to solve complicated AI tasks, with language serving as a generic interface to empower this. Based on this philosophy, we present HuggingGPT, an LLM-powered agent that leverages LLMs (e.g., ChatGPT) to connect various AI models in machine learning communities (e.g., Hugging Face) to solve AI tasks. Specifically, we use ChatGPT to conduct task planning when receiving a user request, select models according to their function descriptions available in Hugging Face, execute each subtask with the selected AI model, and summarize the response according to the execution results. By leveraging the strong language capability of ChatGPT and abundant AI models in Hugging Face, HuggingGPT can tackle a wide range of sophisticated AI tasks spanning different modalities and domains and achieve impressive results in language, vision, speech, and other challenging tasks, which paves a new way towards the realization of artificial general intelligence.

## A Watermark for Large Language Models

- **Authors:** John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al.
- **arXiv id:** [2301.10226v4](http://arxiv.org/abs/2301.10226v4) **Published Date:** 2023-01-24
- **LangChain:**

  - **API Reference:** [langchain_community...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)

**Abstract:** Potential harms of large language models can be mitigated by watermarking model output, i.e., embedding signals into generated text that are invisible to humans but algorithmically detectable from a short span of tokens. We propose a watermarking framework for proprietary language models. The watermark can be embedded with negligible impact on text quality, and can be detected using an efficient open-source algorithm without access to the language model API or parameters. The watermark works by selecting a randomized set of "green" tokens before a word is generated, and then softly promoting use of green tokens during sampling. We propose a statistical test for detecting the watermark with interpretable p-values, and derive an information-theoretic framework for analyzing the sensitivity of the watermark. We test the watermark using a multi-billion parameter model from the Open Pretrained Transformer (OPT) family, and discuss robustness and security.

## Precise Zero-Shot Dense Retrieval without Relevance Labels

- **Authors:** Luyu Gao, Xueguang Ma, Jimmy Lin, et al.
- **arXiv id:** [2212.10496v1](http://arxiv.org/abs/2212.10496v1) **Published Date:** 2022-12-20
- **LangChain:**

  - **Documentation:** [docs/concepts](https://python.langchain.com/docs/concepts)
  - **API Reference:** [langchain...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder)
  - **Template:** [hyde](https://python.langchain.com/docs/templates/hyde)
  - **Cookbook:** [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)

**Abstract:** While dense retrieval has been shown effective and efficient across tasks and languages, it remains difficult to create effective fully zero-shot dense retrieval systems when no relevance label is available. In this paper, we recognize the difficulty of zero-shot learning and encoding relevance. Instead, we propose to pivot through Hypothetical Document Embeddings (HyDE). Given a query, HyDE first zero-shot instructs an instruction-following language model (e.g. InstructGPT) to generate a hypothetical document. The document captures relevance patterns but is unreal and may contain false details. Then, an unsupervised contrastively learned encoder (e.g. Contriever) encodes the document into an embedding vector. This vector identifies a neighborhood in the corpus embedding space, where similar real documents are retrieved based on vector similarity. This second step grounds the generated document to the actual corpus, with the encoder's dense bottleneck filtering out the incorrect details. Our experiments show that HyDE significantly outperforms the state-of-the-art unsupervised dense retriever Contriever and shows strong performance comparable to fine-tuned retrievers, across various tasks (e.g. web search, QA, fact verification) and languages (e.g. sw, ko, ja).

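The linked `HypotheticalDocumentEmbedder` wraps this HyDE recipe: generate a hypothetical answer document, embed it, and search the corpus with that vector. A minimal sketch, assuming OpenAI models for both generation and embeddings:

```python
# HyDE sketch: embed an LLM-generated hypothetical document rather than the
# raw query, then use the resulting vector for similarity search.
from langchain.chains import HypotheticalDocumentEmbedder
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
base_embeddings = OpenAIEmbeddings()

# "web_search" selects one of the built-in HyDE prompt templates.
hyde = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, "web_search")

vector = hyde.embed_query("Where is the Taj Mahal?")
# `vector` can now be passed to a vector store's similarity search, or the
# embedder can be used directly as the embedding function of a vector store.
```
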
## Constitutional AI: Harmlessness from AI Feedback

- **Authors:** Yuntao Bai, Saurav Kadavath, Sandipan Kundu, et al.
- **arXiv id:** [2212.08073v1](http://arxiv.org/abs/2212.08073v1) **Published Date:** 2022-12-15
- **LangChain:**

  - **Documentation:** [docs/versions/migrating_chains/constitutional_chain](https://python.langchain.com/docs/versions/migrating_chains/constitutional_chain)

**Abstract:** As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.

## Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments

- **Authors:** Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al.
- **arXiv id:** [2212.07425v3](http://arxiv.org/abs/2212.07425v3) **Published Date:** 2022-12-12
- **LangChain:**

  - **API Reference:** [langchain_experimental.fallacy_removal](https://python.langchain.com/api_reference/experimental/fallacy_removal.html)

**Abstract:** The spread of misinformation, propaganda, and flawed argumentation has been amplified in the Internet era. Given the volume of data and the subtlety of identifying violations of argumentation norms, supporting information analytics tasks, like content moderation, with trustworthy methods that can identify logical fallacies is essential. In this paper, we formalize prior theoretical work on logical fallacies into a comprehensive three-stage evaluation framework of detection, coarse-grained, and fine-grained classification. We adapt existing evaluation datasets for each stage of the evaluation. We employ three families of robust and explainable methods based on prototype reasoning, instance-based reasoning, and knowledge injection. The methods combine language models with background knowledge and explainable mechanisms. Moreover, we address data sparsity with strategies for data augmentation and curriculum learning. Our three-stage framework natively consolidates prior datasets and methods from existing tasks, like propaganda detection, serving as an overarching evaluation testbed. We extensively evaluate these methods on our datasets, focusing on their robustness and explainability. Our results provide insight into the strengths and weaknesses of the methods on different components and fallacy classes, indicating that fallacy identification is a challenging task that may require specialized forms of reasoning to capture various classes. We share our open-source code and data on GitHub to support further work on logical fallacy identification.

## Complementary Explanations for Effective In-Context Learning

- **Authors:** Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al.
- **arXiv id:** [2211.13892v2](http://arxiv.org/abs/2211.13892v2) **Published Date:** 2022-11-25
- **LangChain:**

  - **API Reference:** [langchain_core...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)

**Abstract:** Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts, but there has been limited understanding of exactly how these explanations function or why they are effective. This work aims to better understand the mechanisms by which explanations are used for in-context learning. We first study the impact of two different factors on the performance of prompts with explanations: the computation trace (the way the solution is decomposed) and the natural language used to express the prompt. By perturbing explanations on three controlled tasks, we show that both factors contribute to the effectiveness of explanations. We further study how to form maximally effective sets of explanations for solving a given test query. We find that LLMs can benefit from the complementarity of the explanation set: diverse reasoning skills shown by different exemplars can lead to better performance. Therefore, we propose a maximal marginal relevance-based exemplar selection approach for constructing exemplar sets that are both relevant as well as complementary, which successfully improves the in-context learning performance across three real-world tasks on multiple LLMs.

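The linked `MaxMarginalRelevanceExampleSelector` applies this relevance-plus-complementarity criterion when choosing few-shot exemplars. A hedged sketch with toy examples; the embedding model and FAISS vector store are illustrative assumptions:

```python
# MMR few-shot selection sketch: pick exemplars that are relevant to the
# input but diverse with respect to one another.
from langchain_community.vectorstores import FAISS
from langchain_core.example_selectors import MaxMarginalRelevanceExampleSelector
from langchain_openai import OpenAIEmbeddings

examples = [  # toy antonym pairs, purely illustrative
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
]

selector = MaxMarginalRelevanceExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), FAISS, k=2
)
print(selector.select_examples({"input": "worried"}))
```
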
## PAL: Program-aided Language Models

- **Authors:** Luyu Gao, Aman Madaan, Shuyan Zhou, et al.
- **arXiv id:** [2211.10435v2](http://arxiv.org/abs/2211.10435v2) **Published Date:** 2022-11-18
- **LangChain:**

  - **API Reference:** [langchain_experimental.pal_chain](https://python.langchain.com/api_reference/experimental/pal_chain.html), [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain)
  - **Cookbook:** [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)

**Abstract:** Large language models (LLMs) have recently demonstrated an impressive ability to perform arithmetic and symbolic reasoning tasks, when provided with a few examples at test time ("few-shot prompting"). Much of this success can be attributed to prompting methods such as "chain-of-thought", which employ LLMs for both understanding the problem description by decomposing it into steps, as well as solving each step of the problem. While LLMs seem to be adept at this sort of step-by-step decomposition, LLMs often make logical and arithmetic mistakes in the solution part, even when the problem is decomposed correctly. In this paper, we present Program-Aided Language models (PAL): a novel approach that uses the LLM to read natural language problems and generate programs as the intermediate reasoning steps, but offloads the solution step to a runtime such as a Python interpreter. With PAL, decomposing the natural language problem into runnable steps remains the only learning task for the LLM, while solving is delegated to the interpreter. We demonstrate this synergy between a neural LLM and a symbolic interpreter across 13 mathematical, symbolic, and algorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. In all these natural language reasoning tasks, generating code using an LLM and reasoning using a Python interpreter leads to more accurate results than much larger models. For example, PAL using Codex achieves state-of-the-art few-shot accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B which uses chain-of-thought by absolute 15% top-1. Our code and data are publicly available at http://reasonwithpal.com/.

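The experimental `PALChain` linked above follows this recipe: the LLM writes a short Python program for the reasoning steps and a Python runtime computes the answer. A hedged sketch; because the chain executes model-generated code, recent releases require an explicit opt-in flag and it should only be run in a sandboxed environment (the model choice is an assumption):

```python
# PAL sketch: offload the "solve" step to the Python interpreter while the
# LLM only writes the intermediate program. Experimental; runs generated code.
from langchain_experimental.pal_chain import PALChain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
pal_chain = PALChain.from_math_prompt(
    llm,
    verbose=True,
    allow_dangerous_code=True,  # explicit acknowledgement that generated code is executed
)

question = (
    "Jan has three times the number of pets as Marcia. Marcia has two more "
    "pets than Cindy. If Cindy has four pets, how many pets do the three have?"
)
pal_chain.invoke(question)
```
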
## An Analysis of Fusion Functions for Hybrid Retrieval

- **Authors:** Sebastian Bruch, Siyu Gai, Amir Ingber
- **arXiv id:** [2210.11934v2](http://arxiv.org/abs/2210.11934v2) **Published Date:** 2022-10-21
- **LangChain:**

  - **Documentation:** [docs/concepts](https://python.langchain.com/docs/concepts)

**Abstract:** We study hybrid search in text retrieval where lexical and semantic search are fused together with the intuition that the two are complementary in how they model relevance. In particular, we examine fusion by a convex combination (CC) of lexical and semantic scores, as well as the Reciprocal Rank Fusion (RRF) method, and identify their advantages and potential pitfalls. Contrary to existing studies, we find RRF to be sensitive to its parameters; that the learning of a CC fusion is generally agnostic to the choice of score normalization; that CC outperforms RRF in in-domain and out-of-domain settings; and finally, that CC is sample efficient, requiring only a small set of training examples to tune its only parameter to a target domain.

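In LangChain, hybrid lexical-plus-semantic search is usually assembled with `EnsembleRetriever`, which fuses the two ranked lists with Reciprocal Rank Fusion, one of the fusion functions analyzed in this paper. A hedged sketch; the tiny corpus, BM25 retriever, and embedding choice are illustrative assumptions:

```python
# Hybrid retrieval sketch: fuse BM25 (lexical) and dense (semantic) rankings
# with Reciprocal Rank Fusion via EnsembleRetriever.
from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

texts = [
    "RRF combines ranked lists without score normalization",
    "Convex combinations fuse normalized lexical and semantic scores",
    "BM25 is a classic lexical relevance function",
]

bm25 = BM25Retriever.from_texts(texts)
bm25.k = 2
dense = FAISS.from_texts(texts, OpenAIEmbeddings()).as_retriever(search_kwargs={"k": 2})

hybrid = EnsembleRetriever(retrievers=[bm25, dense], weights=[0.5, 0.5])
docs = hybrid.invoke("how are ranked lists combined?")
```
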
## ReAct: Synergizing Reasoning and Acting in Language Models

- **Authors:** Shunyu Yao, Jeffrey Zhao, Dian Yu, et al.
- **arXiv id:** [2210.03629v3](http://arxiv.org/abs/2210.03629v3) **Published Date:** 2022-10-06
- **LangChain:**

  - **Documentation:** [docs/integrations/tools/ionic_shopping](https://python.langchain.com/docs/integrations/tools/ionic_shopping), [docs/integrations/providers/cohere](https://python.langchain.com/docs/integrations/providers/cohere), [docs/concepts](https://python.langchain.com/docs/concepts)
  - **API Reference:** [langchain...create_react_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.react.agent.create_react_agent.html#langchain.agents.react.agent.create_react_agent), [langchain...TrajectoryEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain)

**Abstract:** While large language models (LLMs) have demonstrated impressive capabilities across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. action plan generation) have primarily been studied as separate topics. In this paper, we explore the use of LLMs to generate both reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with external sources, such as knowledge bases or environments, to gather additional information. We apply our approach, named ReAct, to a diverse set of language and decision making tasks and demonstrate its effectiveness over state-of-the-art baselines, as well as improved human interpretability and trustworthiness over methods without reasoning or acting components. Concretely, on question answering (HotpotQA) and fact verification (Fever), ReAct overcomes issues of hallucination and error propagation prevalent in chain-of-thought reasoning by interacting with a simple Wikipedia API, and generates human-like task-solving trajectories that are more interpretable than baselines without reasoning traces. On two interactive decision making benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and reinforcement learning methods by an absolute success rate of 34% and 10% respectively, while being prompted with only one or two in-context examples. Project site with code: https://react-lm.github.io

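The linked `create_react_agent` constructor builds an agent that runs this Thought → Action → Observation loop. A hedged sketch, assuming a DuckDuckGo search tool, an OpenAI chat model, and the public `hwchase17/react` prompt from the LangChain Hub:

```python
# ReAct sketch: interleave reasoning traces with tool calls until the agent
# reaches a final answer. Tool and model choices are illustrative assumptions.
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [DuckDuckGoSearchRun()]

prompt = hub.pull("hwchase17/react")  # standard ReAct prompt template
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

executor.invoke({"input": "Which team won the 2022 FIFA World Cup?"})
```
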
## Deep Lake: a Lakehouse for Deep Learning

- **Authors:** Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al.
- **arXiv id:** [2209.10785v2](http://arxiv.org/abs/2209.10785v2) **Published Date:** 2022-09-22
- **LangChain:**

  - **Documentation:** [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)

**Abstract:** Traditional data lakes provide critical data infrastructure for analytical workloads by enabling time travel, running SQL queries, ingesting data with ACID transactions, and visualizing petabyte-scale datasets on cloud storage. They allow organizations to break down data silos, unlock data-driven decision-making, improve operational efficiency, and reduce costs. However, as deep learning usage increases, traditional data lakes are not well-designed for applications such as natural language processing (NLP), audio processing, computer vision, and applications involving non-tabular datasets. This paper presents Deep Lake, an open-source lakehouse for deep learning applications developed at Activeloop. Deep Lake maintains the benefits of a vanilla data lake with one key difference: it stores complex data, such as images, videos, annotations, as well as tabular data, in the form of tensors and rapidly streams the data over the network to (a) Tensor Query Language, (b) in-browser visualization engine, or (c) deep learning frameworks without sacrificing GPU utilization. Datasets stored in Deep Lake can be accessed from PyTorch, TensorFlow, JAX, and integrate with numerous MLOps tools.

## Matryoshka Representation Learning

- **Authors:** Aditya Kusupati, Gantavya Bhatt, Aniket Rege, et al.
- **arXiv id:** [2205.13147v4](http://arxiv.org/abs/2205.13147v4) **Published Date:** 2022-05-26
- **LangChain:**

  - **Documentation:** [docs/integrations/providers/snowflake](https://python.langchain.com/docs/integrations/providers/snowflake)

**Abstract:** Learned representations are a central component in modern ML systems, serving a multitude of downstream tasks. When training such representations, it is often the case that computational and statistical constraints for each downstream task are unknown. In this context rigid, fixed capacity representations can be either over or under-accommodating to the task at hand. This leads us to ask: can we design a flexible representation that can adapt to multiple downstream tasks with varying computational resources? Our main contribution is Matryoshka Representation Learning (MRL) which encodes information at different granularities and allows a single embedding to adapt to the computational constraints of downstream tasks. MRL minimally modifies existing representation learning pipelines and imposes no additional cost during inference and deployment. MRL learns coarse-to-fine representations that are at least as accurate and rich as independently trained low-dimensional representations. The flexibility within the learned Matryoshka Representations offer: (a) up to 14x smaller embedding size for ImageNet-1K classification at the same level of accuracy; (b) up to 14x real-world speed-ups for large-scale retrieval on ImageNet-1K and 4K; and (c) up to 2% accuracy improvements for long-tail few-shot classification, all while being as robust as the original representations. Finally, we show that MRL extends seamlessly to web-scale datasets (ImageNet, JFT) across various modalities -- vision (ViT, ResNet), vision + language (ALIGN) and language (BERT). MRL code and pretrained models are open-sourced at https://github.com/RAIVNLab/MRL.

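The Snowflake integration linked above concerns embedding models in this family; with an MRL-trained embedding, a leading prefix of the vector remains useful on its own. A generic, hedged sketch of that truncation trick using NumPy only; the 256-dimension cut-off and the random vector are stand-ins, not values from the paper:

```python
# Matryoshka-style truncation sketch: keep only the leading d dimensions of
# an MRL-trained embedding and L2-normalize before cosine-similarity search.
import numpy as np

def truncate_embedding(vec: np.ndarray, d: int = 256) -> np.ndarray:
    """Keep the first d dimensions and re-normalize (assumes MRL training)."""
    prefix = np.asarray(vec, dtype=np.float32)[:d]
    norm = np.linalg.norm(prefix)
    return prefix / norm if norm > 0 else prefix

full = np.random.default_rng(0).normal(size=1024)  # stand-in for a real embedding
small = truncate_embedding(full, d=256)             # 4x smaller index footprint
```
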
## Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages

- **Authors:** Kevin Heffernan, Onur Çelebi, Holger Schwenk
- **arXiv id:** [2205.12654v1](http://arxiv.org/abs/2205.12654v1) **Published Date:** 2022-05-25
- **LangChain:**

  - **API Reference:** [langchain_community...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)

**Abstract:** Scaling multilingual representation learning beyond the hundred most frequent languages is challenging, in particular to cover the long tail of low-resource languages. A promising approach has been to train one-for-all multilingual models capable of cross-lingual transfer, but these models often suffer from insufficient capacity and interference between unrelated languages. Instead, we move away from this approach and focus on training multiple language (family) specific representations, but most prominently enable all languages to still be encoded in the same representational space. To achieve this, we focus on teacher-student training, allowing all encoders to be mutually compatible for bitext mining, and enabling fast learning of new languages. We introduce a new teacher-student training scheme which combines supervised and self-supervised training, allowing encoders to take advantage of monolingual training data, which is valuable in the low-resource setting. Our approach significantly outperforms the original LASER encoder. We study very low-resource languages and handle 50 African languages, many of which are not covered by any other model. For these languages, we train sentence encoders, mine bitexts, and validate the bitexts by training NMT systems.

## Evaluating the Text-to-SQL Capabilities of Large Language Models

- **Authors:** Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau
- **arXiv id:** [2204.00498v1](http://arxiv.org/abs/2204.00498v1) **Published Date:** 2022-03-15
- **LangChain:**

  - **Documentation:** [docs/tutorials/sql_qa](https://python.langchain.com/docs/tutorials/sql_qa)
  - **API Reference:** [langchain_community...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL)

**Abstract:** We perform an empirical evaluation of Text-to-SQL capabilities of the Codex language model. We find that, without any finetuning, Codex is a strong baseline on the Spider benchmark; we also analyze the failure modes of Codex in this setting. Furthermore, we demonstrate on the GeoQuery and Scholar benchmarks that a small number of in-domain examples provided in the prompt enables Codex to perform better than state-of-the-art models finetuned on such few-shot examples.

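The linked SQL tutorial builds on the `SQLDatabase` utility, which exposes the schema to the model and executes the SQL it writes. A minimal sketch of the utility by itself; the Chinook SQLite file is an assumed local example database:

```python
# Text-to-SQL groundwork sketch: inspect the schema the model will see and
# run a generated query. "Chinook.db" is an assumed local sample database.
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
print(db.dialect)                    # e.g. "sqlite"
print(db.get_usable_table_names())   # tables the model may reference
print(db.run("SELECT COUNT(*) FROM Artist;"))
```
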
## Locally Typical Sampling

- **Authors:** Clara Meister, Tiago Pimentel, Gian Wiher, et al.
- **arXiv id:** [2202.00666v5](http://arxiv.org/abs/2202.00666v5) **Published Date:** 2022-02-01
- **LangChain:**

  - **API Reference:** [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)

**Abstract:** Today's probabilistic language generators fall short when it comes to producing coherent and fluent text despite the fact that the underlying models perform well under standard metrics, e.g., perplexity. This discrepancy has puzzled the language generation community for the last few years. In this work, we posit that the abstraction of natural language generation as a discrete stochastic process--which allows for an information-theoretic analysis--can provide new insights into the behavior of probabilistic language generators, e.g., why high-probability texts can be dull or repetitive. Humans use language as a means of communicating information, aiming to do so in a simultaneously efficient and error-minimizing manner; in fact, psycholinguistics research suggests humans choose each word in a string with this subconscious goal in mind. We formally define the set of strings that meet this criterion: those for which each word has an information content close to the expected information content, i.e., the conditional entropy of our model. We then propose a simple and efficient procedure for enforcing this criterion when generating from probabilistic models, which we call locally typical sampling. Automatic and human evaluations show that, in comparison to nucleus and top-k sampling, locally typical sampling offers competitive performance (in both abstractive summarization and story generation) in terms of quality while consistently reducing degenerate repetitions.

## ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction

- **Authors:** Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, et al.
- **arXiv id:** [2112.01488v3](http://arxiv.org/abs/2112.01488v3) **Published Date:** 2021-12-02
- **LangChain:**

  - **Documentation:** [docs/integrations/retrievers/ragatouille](https://python.langchain.com/docs/integrations/retrievers/ragatouille), [docs/integrations/providers/ragatouille](https://python.langchain.com/docs/integrations/providers/ragatouille), [docs/concepts](https://python.langchain.com/docs/concepts), [docs/integrations/providers/dspy](https://python.langchain.com/docs/integrations/providers/dspy)

**Abstract:** Neural information retrieval (IR) has greatly advanced search and other knowledge-intensive language tasks. While many neural IR methods encode queries and documents into single-vector representations, late interaction models produce multi-vector representations at the granularity of each token and decompose relevance modeling into scalable token-level computations. This decomposition has been shown to make late interaction more effective, but it inflates the space footprint of these models by an order of magnitude. In this work, we introduce ColBERTv2, a retriever that couples an aggressive residual compression mechanism with a denoised supervision strategy to simultaneously improve the quality and space footprint of late interaction. We evaluate ColBERTv2 across a wide range of benchmarks, establishing state-of-the-art quality within and outside the training domain while reducing the space footprint of late interaction models by 6-10x.

## Learning Transferable Visual Models From Natural Language Supervision

- **Authors:** Alec Radford, Jong Wook Kim, Chris Hallacy, et al.
- **arXiv id:** [2103.00020v1](http://arxiv.org/abs/2103.00020v1) **Published Date:** 2021-02-26
- **LangChain:**

  - **API Reference:** [langchain_experimental.open_clip](https://python.langchain.com/api_reference/experimental/open_clip.html)

**Abstract:** State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.

## Language Models are Few-Shot Learners

- **Authors:** Tom B. Brown, Benjamin Mann, Nick Ryder, et al.
- **arXiv id:** [2005.14165v4](http://arxiv.org/abs/2005.14165v4) **Published Date:** 2020-05-28
- **LangChain:**

  - **Documentation:** [docs/concepts](https://python.langchain.com/docs/concepts)

**Abstract:** Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.

## Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

- **Authors:** Patrick Lewis, Ethan Perez, Aleksandra Piktus, et al.
- **arXiv id:** [2005.11401v4](http://arxiv.org/abs/2005.11401v4) **Published Date:** 2020-05-22
- **LangChain:**

  - **Documentation:** [docs/concepts](https://python.langchain.com/docs/concepts)

**Abstract:** Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.

## CTRL: A Conditional Transformer Language Model for Controllable Generation

- **Authors:** Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al.
- **arXiv id:** [1909.05858v2](http://arxiv.org/abs/1909.05858v2) **Published Date:** 2019-09-11
- **LangChain:**

  - **API Reference:** [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)

**Abstract:** Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data via model-based source attribution. We have released multiple full-sized, pretrained versions of CTRL at https://github.com/salesforce/ctrl.

langchain_md_files/additional_resources/dependents.mdx
ADDED
@@ -0,0 +1,554 @@
# Dependents

Dependents stats for `langchain-ai/langchain`

[Dependent repositories: 538 public / 41179 private](https://github.com/langchain-ai/langchain/network/dependents)

[update: `2023-12-08`; only dependent repositories with Stars > 100]

| Repository | Stars |
| :-------- | -----: |
|[AntonOsika/gpt-engineer](https://github.com/AntonOsika/gpt-engineer) | 46514 |
|
16 |
+
|[imartinez/privateGPT](https://github.com/imartinez/privateGPT) | 44439 |
|
17 |
+
|[LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant) | 35906 |
|
18 |
+
|[hpcaitech/ColossalAI](https://github.com/hpcaitech/ColossalAI) | 35528 |
|
19 |
+
|[moymix/TaskMatrix](https://github.com/moymix/TaskMatrix) | 34342 |
|
20 |
+
|[geekan/MetaGPT](https://github.com/geekan/MetaGPT) | 31126 |
|
21 |
+
|[streamlit/streamlit](https://github.com/streamlit/streamlit) | 28911 |
|
22 |
+
|[reworkd/AgentGPT](https://github.com/reworkd/AgentGPT) | 27833 |
|
23 |
+
|[StanGirard/quivr](https://github.com/StanGirard/quivr) | 26032 |
|
24 |
+
|[OpenBB-finance/OpenBBTerminal](https://github.com/OpenBB-finance/OpenBBTerminal) | 24946 |
|
25 |
+
|[run-llama/llama_index](https://github.com/run-llama/llama_index) | 24859 |
|
26 |
+
|[jmorganca/ollama](https://github.com/jmorganca/ollama) | 20849 |
|
27 |
+
|[openai/chatgpt-retrieval-plugin](https://github.com/openai/chatgpt-retrieval-plugin) | 20249 |
|
28 |
+
|[chatchat-space/Langchain-Chatchat](https://github.com/chatchat-space/Langchain-Chatchat) | 19305 |
|
29 |
+
|[mindsdb/mindsdb](https://github.com/mindsdb/mindsdb) | 19172 |
|
30 |
+
|[PromtEngineer/localGPT](https://github.com/PromtEngineer/localGPT) | 17528 |
|
31 |
+
|[cube-js/cube](https://github.com/cube-js/cube) | 16575 |
|
32 |
+
|[mlflow/mlflow](https://github.com/mlflow/mlflow) | 16000 |
|
33 |
+
|[mudler/LocalAI](https://github.com/mudler/LocalAI) | 14067 |
|
34 |
+
|[logspace-ai/langflow](https://github.com/logspace-ai/langflow) | 13679 |
|
35 |
+
|[GaiZhenbiao/ChuanhuChatGPT](https://github.com/GaiZhenbiao/ChuanhuChatGPT) | 13648 |
|
36 |
+
|[arc53/DocsGPT](https://github.com/arc53/DocsGPT) | 13423 |
|
37 |
+
|[openai/evals](https://github.com/openai/evals) | 12649 |
|
38 |
+
|[airbytehq/airbyte](https://github.com/airbytehq/airbyte) | 12460 |
|
39 |
+
|[langgenius/dify](https://github.com/langgenius/dify) | 11859 |
|
40 |
+
|[databrickslabs/dolly](https://github.com/databrickslabs/dolly) | 10672 |
|
41 |
+
|[AIGC-Audio/AudioGPT](https://github.com/AIGC-Audio/AudioGPT) | 9437 |
|
42 |
+
|[langchain-ai/langchainjs](https://github.com/langchain-ai/langchainjs) | 9227 |
|
43 |
+
|[gventuri/pandas-ai](https://github.com/gventuri/pandas-ai) | 9203 |
|
44 |
+
|[aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples) | 9079 |
|
45 |
+
|[h2oai/h2ogpt](https://github.com/h2oai/h2ogpt) | 8945 |
|
46 |
+
|[PipedreamHQ/pipedream](https://github.com/PipedreamHQ/pipedream) | 7550 |
|
47 |
+
|[bentoml/OpenLLM](https://github.com/bentoml/OpenLLM) | 6957 |
|
48 |
+
|[THUDM/ChatGLM3](https://github.com/THUDM/ChatGLM3) | 6801 |
|
49 |
+
|[microsoft/promptflow](https://github.com/microsoft/promptflow) | 6776 |
|
50 |
+
|[cpacker/MemGPT](https://github.com/cpacker/MemGPT) | 6642 |
|
51 |
+
|[joshpxyne/gpt-migrate](https://github.com/joshpxyne/gpt-migrate) | 6482 |
|
52 |
+
|[zauberzeug/nicegui](https://github.com/zauberzeug/nicegui) | 6037 |
|
53 |
+
|[embedchain/embedchain](https://github.com/embedchain/embedchain) | 6023 |
|
54 |
+
|[mage-ai/mage-ai](https://github.com/mage-ai/mage-ai) | 6019 |
|
55 |
+
|[assafelovic/gpt-researcher](https://github.com/assafelovic/gpt-researcher) | 5936 |
|
56 |
+
|[sweepai/sweep](https://github.com/sweepai/sweep) | 5855 |
|
57 |
+
|[wenda-LLM/wenda](https://github.com/wenda-LLM/wenda) | 5766 |
|
58 |
+
|[zilliztech/GPTCache](https://github.com/zilliztech/GPTCache) | 5710 |
|
59 |
+
|[pdm-project/pdm](https://github.com/pdm-project/pdm) | 5665 |
|
60 |
+
|[GreyDGL/PentestGPT](https://github.com/GreyDGL/PentestGPT) | 5568 |
|
61 |
+
|[gkamradt/langchain-tutorials](https://github.com/gkamradt/langchain-tutorials) | 5507 |
|
62 |
+
|[Shaunwei/RealChar](https://github.com/Shaunwei/RealChar) | 5501 |
|
63 |
+
|[facebookresearch/llama-recipes](https://github.com/facebookresearch/llama-recipes) | 5477 |
|
64 |
+
|[serge-chat/serge](https://github.com/serge-chat/serge) | 5221 |
|
65 |
+
|[run-llama/rags](https://github.com/run-llama/rags) | 4916 |
|
66 |
+
|[openchatai/OpenChat](https://github.com/openchatai/OpenChat) | 4870 |
|
67 |
+
|[danswer-ai/danswer](https://github.com/danswer-ai/danswer) | 4774 |
|
68 |
+
|[langchain-ai/opengpts](https://github.com/langchain-ai/opengpts) | 4709 |
|
69 |
+
|[postgresml/postgresml](https://github.com/postgresml/postgresml) | 4639 |
|
70 |
+
|[MineDojo/Voyager](https://github.com/MineDojo/Voyager) | 4582 |
|
71 |
+
|[intel-analytics/BigDL](https://github.com/intel-analytics/BigDL) | 4581 |
|
72 |
+
|[yihong0618/xiaogpt](https://github.com/yihong0618/xiaogpt) | 4359 |
|
73 |
+
|[RayVentura/ShortGPT](https://github.com/RayVentura/ShortGPT) | 4357 |
|
74 |
+
|[Azure-Samples/azure-search-openai-demo](https://github.com/Azure-Samples/azure-search-openai-demo) | 4317 |
|
75 |
+
|[madawei2699/myGPTReader](https://github.com/madawei2699/myGPTReader) | 4289 |
|
76 |
+
|[apache/nifi](https://github.com/apache/nifi) | 4098 |
|
77 |
+
|[langchain-ai/chat-langchain](https://github.com/langchain-ai/chat-langchain) | 4091 |
|
78 |
+
|[aiwaves-cn/agents](https://github.com/aiwaves-cn/agents) | 4073 |
|
79 |
+
|[krishnaik06/The-Grand-Complete-Data-Science-Materials](https://github.com/krishnaik06/The-Grand-Complete-Data-Science-Materials) | 4065 |
|
80 |
+
|[khoj-ai/khoj](https://github.com/khoj-ai/khoj) | 4016 |
|
81 |
+
|[Azure/azure-sdk-for-python](https://github.com/Azure/azure-sdk-for-python) | 3941 |
|
82 |
+
|[PrefectHQ/marvin](https://github.com/PrefectHQ/marvin) | 3915 |
|
83 |
+
|[OpenBMB/ToolBench](https://github.com/OpenBMB/ToolBench) | 3799 |
|
84 |
+
|[marqo-ai/marqo](https://github.com/marqo-ai/marqo) | 3771 |
|
85 |
+
|[kyegomez/tree-of-thoughts](https://github.com/kyegomez/tree-of-thoughts) | 3688 |
|
86 |
+
|[Unstructured-IO/unstructured](https://github.com/Unstructured-IO/unstructured) | 3543 |
|
87 |
+
|[llm-workflow-engine/llm-workflow-engine](https://github.com/llm-workflow-engine/llm-workflow-engine) | 3515 |
|
88 |
+
|[shroominic/codeinterpreter-api](https://github.com/shroominic/codeinterpreter-api) | 3425 |
|
89 |
+
|[openchatai/OpenCopilot](https://github.com/openchatai/OpenCopilot) | 3418 |
|
90 |
+
|[josStorer/RWKV-Runner](https://github.com/josStorer/RWKV-Runner) | 3297 |
|
91 |
+
|[whitead/paper-qa](https://github.com/whitead/paper-qa) | 3280 |
|
92 |
+
|[homanp/superagent](https://github.com/homanp/superagent) | 3258 |
|
93 |
+
|[ParisNeo/lollms-webui](https://github.com/ParisNeo/lollms-webui) | 3199 |
|
94 |
+
|[OpenBMB/AgentVerse](https://github.com/OpenBMB/AgentVerse) | 3099 |
|
95 |
+
|[project-baize/baize-chatbot](https://github.com/project-baize/baize-chatbot) | 3090 |
|
96 |
+
|[OpenGVLab/InternGPT](https://github.com/OpenGVLab/InternGPT) | 2989 |
|
97 |
+
|[xlang-ai/OpenAgents](https://github.com/xlang-ai/OpenAgents) | 2825 |
|
98 |
+
|[dataelement/bisheng](https://github.com/dataelement/bisheng) | 2797 |
|
99 |
+
|[Mintplex-Labs/anything-llm](https://github.com/Mintplex-Labs/anything-llm) | 2784 |
|
100 |
+
|[OpenBMB/BMTools](https://github.com/OpenBMB/BMTools) | 2734 |
|
101 |
+
|[run-llama/llama-hub](https://github.com/run-llama/llama-hub) | 2721 |
|
102 |
+
|[SamurAIGPT/EmbedAI](https://github.com/SamurAIGPT/EmbedAI) | 2647 |
|
103 |
+
|[NVIDIA/NeMo-Guardrails](https://github.com/NVIDIA/NeMo-Guardrails) | 2637 |
|
104 |
+
|[X-D-Lab/LangChain-ChatGLM-Webui](https://github.com/X-D-Lab/LangChain-ChatGLM-Webui) | 2532 |
|
105 |
+
|[GerevAI/gerev](https://github.com/GerevAI/gerev) | 2517 |
|
106 |
+
|[keephq/keep](https://github.com/keephq/keep) | 2448 |
|
107 |
+
|[yanqiangmiffy/Chinese-LangChain](https://github.com/yanqiangmiffy/Chinese-LangChain) | 2397 |
|
108 |
+
|[OpenGVLab/Ask-Anything](https://github.com/OpenGVLab/Ask-Anything) | 2324 |
|
109 |
+
|[IntelligenzaArtificiale/Free-Auto-GPT](https://github.com/IntelligenzaArtificiale/Free-Auto-GPT) | 2241 |
|
110 |
+
|[YiVal/YiVal](https://github.com/YiVal/YiVal) | 2232 |
|
111 |
+
|[jupyterlab/jupyter-ai](https://github.com/jupyterlab/jupyter-ai) | 2189 |
|
112 |
+
|[Farama-Foundation/PettingZoo](https://github.com/Farama-Foundation/PettingZoo) | 2136 |
|
113 |
+
|[microsoft/TaskWeaver](https://github.com/microsoft/TaskWeaver) | 2126 |
|
114 |
+
|[hwchase17/notion-qa](https://github.com/hwchase17/notion-qa) | 2083 |
|
115 |
+
|[FlagOpen/FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding) | 2053 |
|
116 |
+
|[paulpierre/RasaGPT](https://github.com/paulpierre/RasaGPT) | 1999 |
|
117 |
+
|[hegelai/prompttools](https://github.com/hegelai/prompttools) | 1984 |
|
118 |
+
|[mckinsey/vizro](https://github.com/mckinsey/vizro) | 1951 |
|
119 |
+
|[vocodedev/vocode-python](https://github.com/vocodedev/vocode-python) | 1868 |
|
120 |
+
|[dot-agent/openAMS](https://github.com/dot-agent/openAMS) | 1796 |
|
121 |
+
|[explodinggradients/ragas](https://github.com/explodinggradients/ragas) | 1766 |
|
122 |
+
|[AI-Citizen/SolidGPT](https://github.com/AI-Citizen/SolidGPT) | 1761 |
|
123 |
+
|[Kav-K/GPTDiscord](https://github.com/Kav-K/GPTDiscord) | 1696 |
|
124 |
+
|[run-llama/sec-insights](https://github.com/run-llama/sec-insights) | 1654 |
|
125 |
+
|[avinashkranjan/Amazing-Python-Scripts](https://github.com/avinashkranjan/Amazing-Python-Scripts) | 1635 |
|
126 |
+
|[microsoft/WhatTheHack](https://github.com/microsoft/WhatTheHack) | 1629 |
|
127 |
+
|[noahshinn/reflexion](https://github.com/noahshinn/reflexion) | 1625 |
|
128 |
+
|[psychic-api/psychic](https://github.com/psychic-api/psychic) | 1618 |
|
129 |
+
|[Forethought-Technologies/AutoChain](https://github.com/Forethought-Technologies/AutoChain) | 1611 |
|
130 |
+
|[pinterest/querybook](https://github.com/pinterest/querybook) | 1586 |
|
131 |
+
|[refuel-ai/autolabel](https://github.com/refuel-ai/autolabel) | 1553 |
|
132 |
+
|[jina-ai/langchain-serve](https://github.com/jina-ai/langchain-serve) | 1537 |
|
133 |
+
|[jina-ai/dev-gpt](https://github.com/jina-ai/dev-gpt) | 1522 |
|
134 |
+
|[agiresearch/OpenAGI](https://github.com/agiresearch/OpenAGI) | 1493 |
|
135 |
+
|[ttengwang/Caption-Anything](https://github.com/ttengwang/Caption-Anything) | 1484 |
|
136 |
+
|[greshake/llm-security](https://github.com/greshake/llm-security) | 1483 |
|
137 |
+
|[promptfoo/promptfoo](https://github.com/promptfoo/promptfoo) | 1480 |
|
138 |
+
|[milvus-io/bootcamp](https://github.com/milvus-io/bootcamp) | 1477 |
|
139 |
+
|[richardyc/Chrome-GPT](https://github.com/richardyc/Chrome-GPT) | 1475 |
|
140 |
+
|[melih-unsal/DemoGPT](https://github.com/melih-unsal/DemoGPT) | 1428 |
|
141 |
+
|[YORG-AI/Open-Assistant](https://github.com/YORG-AI/Open-Assistant) | 1419 |
|
142 |
+
|[101dotxyz/GPTeam](https://github.com/101dotxyz/GPTeam) | 1416 |
|
143 |
+
|[jina-ai/thinkgpt](https://github.com/jina-ai/thinkgpt) | 1408 |
|
144 |
+
|[mmz-001/knowledge_gpt](https://github.com/mmz-001/knowledge_gpt) | 1398 |
|
145 |
+
|[intel/intel-extension-for-transformers](https://github.com/intel/intel-extension-for-transformers) | 1387 |
|
146 |
+
|[Azure/azureml-examples](https://github.com/Azure/azureml-examples) | 1385 |
|
147 |
+
|[lunasec-io/lunasec](https://github.com/lunasec-io/lunasec) | 1367 |
|
148 |
+
|[eyurtsev/kor](https://github.com/eyurtsev/kor) | 1355 |
|
149 |
+
|[xusenlinzy/api-for-open-llm](https://github.com/xusenlinzy/api-for-open-llm) | 1325 |
|
150 |
+
|[griptape-ai/griptape](https://github.com/griptape-ai/griptape) | 1323 |
|
151 |
+
|[SuperDuperDB/superduperdb](https://github.com/SuperDuperDB/superduperdb) | 1290 |
|
152 |
+
|[cofactoryai/textbase](https://github.com/cofactoryai/textbase) | 1284 |
|
153 |
+
|[psychic-api/rag-stack](https://github.com/psychic-api/rag-stack) | 1260 |
|
154 |
+
|[filip-michalsky/SalesGPT](https://github.com/filip-michalsky/SalesGPT) | 1250 |
|
155 |
+
|[nod-ai/SHARK](https://github.com/nod-ai/SHARK) | 1237 |
|
156 |
+
|[pluralsh/plural](https://github.com/pluralsh/plural) | 1234 |
|
157 |
+
|[cheshire-cat-ai/core](https://github.com/cheshire-cat-ai/core) | 1194 |
|
158 |
+
|[LC1332/Chat-Haruhi-Suzumiya](https://github.com/LC1332/Chat-Haruhi-Suzumiya) | 1184 |
|
159 |
+
|[poe-platform/server-bot-quick-start](https://github.com/poe-platform/server-bot-quick-start) | 1182 |
|
160 |
+
|[microsoft/X-Decoder](https://github.com/microsoft/X-Decoder) | 1180 |
|
161 |
+
|[juncongmoo/chatllama](https://github.com/juncongmoo/chatllama) | 1171 |
|
162 |
+
|[visual-openllm/visual-openllm](https://github.com/visual-openllm/visual-openllm) | 1156 |
|
163 |
+
|[alejandro-ao/ask-multiple-pdfs](https://github.com/alejandro-ao/ask-multiple-pdfs) | 1153 |
|
164 |
+
|[ThousandBirdsInc/chidori](https://github.com/ThousandBirdsInc/chidori) | 1152 |
|
165 |
+
|[irgolic/AutoPR](https://github.com/irgolic/AutoPR) | 1137 |
|
166 |
+
|[SamurAIGPT/Camel-AutoGPT](https://github.com/SamurAIGPT/Camel-AutoGPT) | 1083 |
|
167 |
+
|[ray-project/llm-applications](https://github.com/ray-project/llm-applications) | 1080 |
|
168 |
+
|[run-llama/llama-lab](https://github.com/run-llama/llama-lab) | 1072 |
|
169 |
+
|[jiran214/GPT-vup](https://github.com/jiran214/GPT-vup) | 1041 |
|
170 |
+
|[MetaGLM/FinGLM](https://github.com/MetaGLM/FinGLM) | 1035 |
|
171 |
+
|[peterw/Chat-with-Github-Repo](https://github.com/peterw/Chat-with-Github-Repo) | 1020 |
|
172 |
+
|[Anil-matcha/ChatPDF](https://github.com/Anil-matcha/ChatPDF) | 991 |
|
173 |
+
|[langchain-ai/langserve](https://github.com/langchain-ai/langserve) | 983 |
|
174 |
+
|[THUDM/AgentTuning](https://github.com/THUDM/AgentTuning) | 976 |
|
175 |
+
|[rlancemartin/auto-evaluator](https://github.com/rlancemartin/auto-evaluator) | 975 |
|
176 |
+
|[codeacme17/examor](https://github.com/codeacme17/examor) | 964 |
|
177 |
+
|[all-in-aigc/gpts-works](https://github.com/all-in-aigc/gpts-works) | 946 |
|
178 |
+
|[Ikaros-521/AI-Vtuber](https://github.com/Ikaros-521/AI-Vtuber) | 946 |
|
179 |
+
|[microsoft/Llama-2-Onnx](https://github.com/microsoft/Llama-2-Onnx) | 898 |
|
180 |
+
|[cirediatpl/FigmaChain](https://github.com/cirediatpl/FigmaChain) | 895 |
|
181 |
+
|[ricklamers/shell-ai](https://github.com/ricklamers/shell-ai) | 893 |
|
182 |
+
|[modelscope/modelscope-agent](https://github.com/modelscope/modelscope-agent) | 893 |
|
183 |
+
|[seanpixel/Teenage-AGI](https://github.com/seanpixel/Teenage-AGI) | 886 |
|
184 |
+
|[ajndkr/lanarky](https://github.com/ajndkr/lanarky) | 880 |
|
185 |
+
|[kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference](https://github.com/kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference) | 872 |
|
186 |
+
|[corca-ai/EVAL](https://github.com/corca-ai/EVAL) | 846 |
|
187 |
+
|[hwchase17/chat-your-data](https://github.com/hwchase17/chat-your-data) | 841 |
|
188 |
+
|[kreneskyp/ix](https://github.com/kreneskyp/ix) | 821 |
|
189 |
+
|[Link-AGI/AutoAgents](https://github.com/Link-AGI/AutoAgents) | 820 |
|
190 |
+
|[truera/trulens](https://github.com/truera/trulens) | 794 |
|
191 |
+
|[Dataherald/dataherald](https://github.com/Dataherald/dataherald) | 788 |
|
192 |
+
|[sunlabuiuc/PyHealth](https://github.com/sunlabuiuc/PyHealth) | 783 |
|
193 |
+
|[jondurbin/airoboros](https://github.com/jondurbin/airoboros) | 783 |
|
194 |
+
|[pyspark-ai/pyspark-ai](https://github.com/pyspark-ai/pyspark-ai) | 782 |
|
195 |
+
|[confident-ai/deepeval](https://github.com/confident-ai/deepeval) | 780 |
|
196 |
+
|[billxbf/ReWOO](https://github.com/billxbf/ReWOO) | 777 |
|
197 |
+
|[langchain-ai/streamlit-agent](https://github.com/langchain-ai/streamlit-agent) | 776 |
|
198 |
+
|[akshata29/entaoai](https://github.com/akshata29/entaoai) | 771 |
|
199 |
+
|[LambdaLabsML/examples](https://github.com/LambdaLabsML/examples) | 770 |
|
200 |
+
|[getmetal/motorhead](https://github.com/getmetal/motorhead) | 768 |
|
201 |
+
|[Dicklesworthstone/swiss_army_llama](https://github.com/Dicklesworthstone/swiss_army_llama) | 757 |
|
202 |
+
|[ruoccofabrizio/azure-open-ai-embeddings-qna](https://github.com/ruoccofabrizio/azure-open-ai-embeddings-qna) | 757 |
|
203 |
+
|[msoedov/langcorn](https://github.com/msoedov/langcorn) | 754 |
|
204 |
+
|[e-johnstonn/BriefGPT](https://github.com/e-johnstonn/BriefGPT) | 753 |
|
205 |
+
|[microsoft/sample-app-aoai-chatGPT](https://github.com/microsoft/sample-app-aoai-chatGPT) | 749 |
|
206 |
+
|[explosion/spacy-llm](https://github.com/explosion/spacy-llm) | 731 |
|
207 |
+
|[MiuLab/Taiwan-LLM](https://github.com/MiuLab/Taiwan-LLM) | 716 |
|
208 |
+
|[whyiyhw/chatgpt-wechat](https://github.com/whyiyhw/chatgpt-wechat) | 702 |
|
209 |
+
|[Azure-Samples/openai](https://github.com/Azure-Samples/openai) | 692 |
|
210 |
+
|[iusztinpaul/hands-on-llms](https://github.com/iusztinpaul/hands-on-llms) | 687 |
|
211 |
+
|[safevideo/autollm](https://github.com/safevideo/autollm) | 682 |
|
212 |
+
|[OpenGenerativeAI/GenossGPT](https://github.com/OpenGenerativeAI/GenossGPT) | 669 |
|
213 |
+
|[NoDataFound/hackGPT](https://github.com/NoDataFound/hackGPT) | 663 |
|
214 |
+
|[AILab-CVC/GPT4Tools](https://github.com/AILab-CVC/GPT4Tools) | 662 |
|
215 |
+
|[langchain-ai/auto-evaluator](https://github.com/langchain-ai/auto-evaluator) | 657 |
|
216 |
+
|[yvann-ba/Robby-chatbot](https://github.com/yvann-ba/Robby-chatbot) | 639 |
|
217 |
+
|[alexanderatallah/window.ai](https://github.com/alexanderatallah/window.ai) | 635 |
|
218 |
+
|[amosjyng/langchain-visualizer](https://github.com/amosjyng/langchain-visualizer) | 630 |
|
219 |
+
|[microsoft/PodcastCopilot](https://github.com/microsoft/PodcastCopilot) | 621 |
|
220 |
+
|[aws-samples/aws-genai-llm-chatbot](https://github.com/aws-samples/aws-genai-llm-chatbot) | 616 |
|
221 |
+
|[NeumTry/NeumAI](https://github.com/NeumTry/NeumAI) | 605 |
|
222 |
+
|[namuan/dr-doc-search](https://github.com/namuan/dr-doc-search) | 599 |
|
223 |
+
|[plastic-labs/tutor-gpt](https://github.com/plastic-labs/tutor-gpt) | 595 |
|
224 |
+
|[marimo-team/marimo](https://github.com/marimo-team/marimo) | 591 |
|
225 |
+
|[yakami129/VirtualWife](https://github.com/yakami129/VirtualWife) | 586 |
|
226 |
+
|[xuwenhao/geektime-ai-course](https://github.com/xuwenhao/geektime-ai-course) | 584 |
|
227 |
+
|[jonra1993/fastapi-alembic-sqlmodel-async](https://github.com/jonra1993/fastapi-alembic-sqlmodel-async) | 573 |
|
228 |
+
|[dgarnitz/vectorflow](https://github.com/dgarnitz/vectorflow) | 568 |
|
229 |
+
|[yeagerai/yeagerai-agent](https://github.com/yeagerai/yeagerai-agent) | 564 |
|
230 |
+
|[daveebbelaar/langchain-experiments](https://github.com/daveebbelaar/langchain-experiments) | 563 |
|
231 |
+
|[traceloop/openllmetry](https://github.com/traceloop/openllmetry) | 559 |
|
232 |
+
|[Agenta-AI/agenta](https://github.com/Agenta-AI/agenta) | 546 |
|
233 |
+
|[michaelthwan/searchGPT](https://github.com/michaelthwan/searchGPT) | 545 |
|
234 |
+
|[jina-ai/agentchain](https://github.com/jina-ai/agentchain) | 544 |
|
235 |
+
|[mckaywrigley/repo-chat](https://github.com/mckaywrigley/repo-chat) | 533 |
|
236 |
+
|[marella/chatdocs](https://github.com/marella/chatdocs) | 532 |
|
237 |
+
|[opentensor/bittensor](https://github.com/opentensor/bittensor) | 532 |
|
238 |
+
|[DjangoPeng/openai-quickstart](https://github.com/DjangoPeng/openai-quickstart) | 527 |
|
239 |
+
|[freddyaboulton/gradio-tools](https://github.com/freddyaboulton/gradio-tools) | 517 |
|
240 |
+
|[sidhq/Multi-GPT](https://github.com/sidhq/Multi-GPT) | 515 |
|
241 |
+
|[alejandro-ao/langchain-ask-pdf](https://github.com/alejandro-ao/langchain-ask-pdf) | 514 |
|
242 |
+
|[sajjadium/ctf-archives](https://github.com/sajjadium/ctf-archives) | 507 |
|
243 |
+
|[continuum-llms/chatgpt-memory](https://github.com/continuum-llms/chatgpt-memory) | 502 |
|
244 |
+
|[steamship-core/steamship-langchain](https://github.com/steamship-core/steamship-langchain) | 494 |
|
245 |
+
|[mpaepper/content-chatbot](https://github.com/mpaepper/content-chatbot) | 493 |
|
246 |
+
|[langchain-ai/langchain-aiplugin](https://github.com/langchain-ai/langchain-aiplugin) | 492 |
|
247 |
+
|[logan-markewich/llama_index_starter_pack](https://github.com/logan-markewich/llama_index_starter_pack) | 483 |
|
248 |
+
|[datawhalechina/llm-universe](https://github.com/datawhalechina/llm-universe) | 475 |
|
249 |
+
|[leondz/garak](https://github.com/leondz/garak) | 464 |
|
250 |
+
|[RedisVentures/ArXivChatGuru](https://github.com/RedisVentures/ArXivChatGuru) | 461 |
|
251 |
+
|[Anil-matcha/Chatbase](https://github.com/Anil-matcha/Chatbase) | 455 |
|
252 |
+
|[Aiyu-awa/luna-ai](https://github.com/Aiyu-awa/luna-ai) | 450 |
|
253 |
+
|[DataDog/dd-trace-py](https://github.com/DataDog/dd-trace-py) | 450 |
|
254 |
+
|[Azure-Samples/miyagi](https://github.com/Azure-Samples/miyagi) | 449 |
|
255 |
+
|[poe-platform/poe-protocol](https://github.com/poe-platform/poe-protocol) | 447 |
|
256 |
+
|[onlyphantom/llm-python](https://github.com/onlyphantom/llm-python) | 446 |
|
257 |
+
|[junruxiong/IncarnaMind](https://github.com/junruxiong/IncarnaMind) | 441 |
|
258 |
+
|[CarperAI/OpenELM](https://github.com/CarperAI/OpenELM) | 441 |
|
259 |
+
|[daodao97/chatdoc](https://github.com/daodao97/chatdoc) | 437 |
|
260 |
+
|[showlab/VLog](https://github.com/showlab/VLog) | 436 |
|
261 |
+
|[wandb/weave](https://github.com/wandb/weave) | 420 |
|
262 |
+
|[QwenLM/Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) | 419 |
|
263 |
+
|[huchenxucs/ChatDB](https://github.com/huchenxucs/ChatDB) | 416 |
|
264 |
+
|[jerlendds/osintbuddy](https://github.com/jerlendds/osintbuddy) | 411 |
|
265 |
+
|[monarch-initiative/ontogpt](https://github.com/monarch-initiative/ontogpt) | 408 |
|
266 |
+
|[mallorbc/Finetune_LLMs](https://github.com/mallorbc/Finetune_LLMs) | 406 |
|
267 |
+
|[JayZeeDesign/researcher-gpt](https://github.com/JayZeeDesign/researcher-gpt) | 405 |
|
268 |
+
|[rsaryev/talk-codebase](https://github.com/rsaryev/talk-codebase) | 401 |
|
269 |
+
|[langchain-ai/langsmith-cookbook](https://github.com/langchain-ai/langsmith-cookbook) | 398 |
|
270 |
+
|[mtenenholtz/chat-twitter](https://github.com/mtenenholtz/chat-twitter) | 398 |
|
271 |
+
|[morpheuslord/GPT_Vuln-analyzer](https://github.com/morpheuslord/GPT_Vuln-analyzer) | 391 |
|
272 |
+
|[MagnivOrg/prompt-layer-library](https://github.com/MagnivOrg/prompt-layer-library) | 387 |
|
273 |
+
|[JohnSnowLabs/langtest](https://github.com/JohnSnowLabs/langtest) | 384 |
|
274 |
+
|[mrwadams/attackgen](https://github.com/mrwadams/attackgen) | 381 |
|
275 |
+
|[codefuse-ai/Test-Agent](https://github.com/codefuse-ai/Test-Agent) | 380 |
|
276 |
+
|[personoids/personoids-lite](https://github.com/personoids/personoids-lite) | 379 |
|
277 |
+
|[mosaicml/examples](https://github.com/mosaicml/examples) | 378 |
|
278 |
+
|[steamship-packages/langchain-production-starter](https://github.com/steamship-packages/langchain-production-starter) | 370 |
|
279 |
+
|[FlagAI-Open/Aquila2](https://github.com/FlagAI-Open/Aquila2) | 365 |
|
280 |
+
|[Mintplex-Labs/vector-admin](https://github.com/Mintplex-Labs/vector-admin) | 365 |
|
281 |
+
|[NimbleBoxAI/ChainFury](https://github.com/NimbleBoxAI/ChainFury) | 357 |
|
282 |
+
|[BlackHC/llm-strategy](https://github.com/BlackHC/llm-strategy) | 354 |
|
283 |
+
|[lilacai/lilac](https://github.com/lilacai/lilac) | 352 |
|
284 |
+
|[preset-io/promptimize](https://github.com/preset-io/promptimize) | 351 |
|
285 |
+
|[yuanjie-ai/ChatLLM](https://github.com/yuanjie-ai/ChatLLM) | 347 |
|
286 |
+
|[andylokandy/gpt-4-search](https://github.com/andylokandy/gpt-4-search) | 346 |
|
287 |
+
|[zhoudaquan/ChatAnything](https://github.com/zhoudaquan/ChatAnything) | 343 |
|
288 |
+
|[rgomezcasas/dotfiles](https://github.com/rgomezcasas/dotfiles) | 343 |
|
289 |
+
|[tigerlab-ai/tiger](https://github.com/tigerlab-ai/tiger) | 342 |
|
290 |
+
|[HumanSignal/label-studio-ml-backend](https://github.com/HumanSignal/label-studio-ml-backend) | 334 |
|
291 |
+
|[nasa-petal/bidara](https://github.com/nasa-petal/bidara) | 334 |
|
292 |
+
|[momegas/megabots](https://github.com/momegas/megabots) | 334 |
|
293 |
+
|[Cheems-Seminar/grounded-segment-any-parts](https://github.com/Cheems-Seminar/grounded-segment-any-parts) | 330 |
|
294 |
+
|[CambioML/pykoi](https://github.com/CambioML/pykoi) | 326 |
|
295 |
+
|[Nuggt-dev/Nuggt](https://github.com/Nuggt-dev/Nuggt) | 326 |
|
296 |
+
|[wandb/edu](https://github.com/wandb/edu) | 326 |
|
297 |
+
|[Haste171/langchain-chatbot](https://github.com/Haste171/langchain-chatbot) | 324 |
|
298 |
+
|[sugarforever/LangChain-Tutorials](https://github.com/sugarforever/LangChain-Tutorials) | 322 |
|
299 |
+
|[liangwq/Chatglm_lora_multi-gpu](https://github.com/liangwq/Chatglm_lora_multi-gpu) | 321 |
|
300 |
+
|[ur-whitelab/chemcrow-public](https://github.com/ur-whitelab/chemcrow-public) | 320 |
|
301 |
+
|[itamargol/openai](https://github.com/itamargol/openai) | 318 |
|
302 |
+
|[gia-guar/JARVIS-ChatGPT](https://github.com/gia-guar/JARVIS-ChatGPT) | 304 |
|
303 |
+
|[SpecterOps/Nemesis](https://github.com/SpecterOps/Nemesis) | 302 |
|
304 |
+
|[facebookresearch/personal-timeline](https://github.com/facebookresearch/personal-timeline) | 302 |
|
305 |
+
|[hnawaz007/pythondataanalysis](https://github.com/hnawaz007/pythondataanalysis) | 301 |
|
306 |
+
|[Chainlit/cookbook](https://github.com/Chainlit/cookbook) | 300 |
|
307 |
+
|[airobotlab/KoChatGPT](https://github.com/airobotlab/KoChatGPT) | 300 |
|
308 |
+
|[GPT-Fathom/GPT-Fathom](https://github.com/GPT-Fathom/GPT-Fathom) | 299 |
|
309 |
+
|[kaarthik108/snowChat](https://github.com/kaarthik108/snowChat) | 299 |
|
310 |
+
|[kyegomez/swarms](https://github.com/kyegomez/swarms) | 296 |
|
311 |
+
|[LangStream/langstream](https://github.com/LangStream/langstream) | 295 |
|
312 |
+
|[genia-dev/GeniA](https://github.com/genia-dev/GeniA) | 294 |
|
313 |
+
|[shamspias/customizable-gpt-chatbot](https://github.com/shamspias/customizable-gpt-chatbot) | 291 |
|
314 |
+
|[TsinghuaDatabaseGroup/DB-GPT](https://github.com/TsinghuaDatabaseGroup/DB-GPT) | 290 |
|
315 |
+
|[conceptofmind/toolformer](https://github.com/conceptofmind/toolformer) | 283 |
|
316 |
+
|[sullivan-sean/chat-langchainjs](https://github.com/sullivan-sean/chat-langchainjs) | 283 |
|
317 |
+
|[AutoPackAI/beebot](https://github.com/AutoPackAI/beebot) | 282 |
|
318 |
+
|[pablomarin/GPT-Azure-Search-Engine](https://github.com/pablomarin/GPT-Azure-Search-Engine) | 282 |
|
319 |
+
|[gkamradt/LLMTest_NeedleInAHaystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) | 280 |
|
320 |
+
|[gustavz/DataChad](https://github.com/gustavz/DataChad) | 280 |
|
321 |
+
|[Safiullah-Rahu/CSV-AI](https://github.com/Safiullah-Rahu/CSV-AI) | 278 |
|
322 |
+
|[hwchase17/chroma-langchain](https://github.com/hwchase17/chroma-langchain) | 275 |
|
323 |
+
|[AkshitIreddy/Interactive-LLM-Powered-NPCs](https://github.com/AkshitIreddy/Interactive-LLM-Powered-NPCs) | 268 |
|
324 |
+
|[ennucore/clippinator](https://github.com/ennucore/clippinator) | 267 |
|
325 |
+
|[artitw/text2text](https://github.com/artitw/text2text) | 264 |
|
326 |
+
|[anarchy-ai/LLM-VM](https://github.com/anarchy-ai/LLM-VM) | 263 |
|
327 |
+
|[wpydcr/LLM-Kit](https://github.com/wpydcr/LLM-Kit) | 262 |
|
328 |
+
|[streamlit/llm-examples](https://github.com/streamlit/llm-examples) | 262 |
|
329 |
+
|[paolorechia/learn-langchain](https://github.com/paolorechia/learn-langchain) | 262 |
|
330 |
+
|[yym68686/ChatGPT-Telegram-Bot](https://github.com/yym68686/ChatGPT-Telegram-Bot) | 261 |
|
331 |
+
|[PradipNichite/Youtube-Tutorials](https://github.com/PradipNichite/Youtube-Tutorials) | 259 |
|
332 |
+
|[radi-cho/datasetGPT](https://github.com/radi-cho/datasetGPT) | 259 |
|
333 |
+
|[ur-whitelab/exmol](https://github.com/ur-whitelab/exmol) | 259 |
|
334 |
+
|[ml6team/fondant](https://github.com/ml6team/fondant) | 254 |
|
335 |
+
|[bborn/howdoi.ai](https://github.com/bborn/howdoi.ai) | 254 |
|
336 |
+
|[rahulnyk/knowledge_graph](https://github.com/rahulnyk/knowledge_graph) | 253 |
|
337 |
+
|[recalign/RecAlign](https://github.com/recalign/RecAlign) | 248 |
|
338 |
+
|[hwchase17/langchain-streamlit-template](https://github.com/hwchase17/langchain-streamlit-template) | 248 |
|
339 |
+
|[fetchai/uAgents](https://github.com/fetchai/uAgents) | 247 |
|
340 |
+
|[arthur-ai/bench](https://github.com/arthur-ai/bench) | 247 |
|
341 |
+
|[miaoshouai/miaoshouai-assistant](https://github.com/miaoshouai/miaoshouai-assistant) | 246 |
|
342 |
+
|[RoboCoachTechnologies/GPT-Synthesizer](https://github.com/RoboCoachTechnologies/GPT-Synthesizer) | 244 |
|
343 |
+
|[langchain-ai/web-explorer](https://github.com/langchain-ai/web-explorer) | 242 |
|
344 |
+
|[kaleido-lab/dolphin](https://github.com/kaleido-lab/dolphin) | 242 |
|
345 |
+
|[PJLab-ADG/DriveLikeAHuman](https://github.com/PJLab-ADG/DriveLikeAHuman) | 241 |
|
346 |
+
|[stepanogil/autonomous-hr-chatbot](https://github.com/stepanogil/autonomous-hr-chatbot) | 238 |
|
347 |
+
|[WongSaang/chatgpt-ui-server](https://github.com/WongSaang/chatgpt-ui-server) | 236 |
|
348 |
+
|[nexus-stc/stc](https://github.com/nexus-stc/stc) | 235 |
|
349 |
+
|[yeagerai/genworlds](https://github.com/yeagerai/genworlds) | 235 |
|
350 |
+
|[Gentopia-AI/Gentopia](https://github.com/Gentopia-AI/Gentopia) | 235 |
|
351 |
+
|[alphasecio/langchain-examples](https://github.com/alphasecio/langchain-examples) | 235 |
|
352 |
+
|[grumpyp/aixplora](https://github.com/grumpyp/aixplora) | 232 |
|
353 |
+
|[shaman-ai/agent-actors](https://github.com/shaman-ai/agent-actors) | 232 |
|
354 |
+
|[darrenburns/elia](https://github.com/darrenburns/elia) | 231 |
|
355 |
+
|[orgexyz/BlockAGI](https://github.com/orgexyz/BlockAGI) | 231 |
|
356 |
+
|[handrew/browserpilot](https://github.com/handrew/browserpilot) | 226 |
|
357 |
+
|[su77ungr/CASALIOY](https://github.com/su77ungr/CASALIOY) | 225 |
|
358 |
+
|[nicknochnack/LangchainDocuments](https://github.com/nicknochnack/LangchainDocuments) | 225 |
|
359 |
+
|[dbpunk-labs/octogen](https://github.com/dbpunk-labs/octogen) | 224 |
|
360 |
+
|[langchain-ai/weblangchain](https://github.com/langchain-ai/weblangchain) | 222 |
|
361 |
+
|[CL-lau/SQL-GPT](https://github.com/CL-lau/SQL-GPT) | 222 |
|
362 |
+
|[alvarosevilla95/autolang](https://github.com/alvarosevilla95/autolang) | 221 |
|
363 |
+
|[showlab/UniVTG](https://github.com/showlab/UniVTG) | 220 |
|
364 |
+
|[edreisMD/plugnplai](https://github.com/edreisMD/plugnplai) | 219 |
|
365 |
+
|[hardbyte/qabot](https://github.com/hardbyte/qabot) | 216 |
|
366 |
+
|[microsoft/azure-openai-in-a-day-workshop](https://github.com/microsoft/azure-openai-in-a-day-workshop) | 215 |
|
367 |
+
|[Azure-Samples/chat-with-your-data-solution-accelerator](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator) | 214 |
|
368 |
+
|[amadad/agentcy](https://github.com/amadad/agentcy) | 213 |
|
369 |
+
|[snexus/llm-search](https://github.com/snexus/llm-search) | 212 |
|
370 |
+
|[afaqueumer/DocQA](https://github.com/afaqueumer/DocQA) | 206 |
|
371 |
+
|[plchld/InsightFlow](https://github.com/plchld/InsightFlow) | 205 |
|
372 |
+
|[yasyf/compress-gpt](https://github.com/yasyf/compress-gpt) | 205 |
|
373 |
+
|[benthecoder/ClassGPT](https://github.com/benthecoder/ClassGPT) | 205 |
|
374 |
+
|[voxel51/voxelgpt](https://github.com/voxel51/voxelgpt) | 204 |
|
375 |
+
|[jbrukh/gpt-jargon](https://github.com/jbrukh/gpt-jargon) | 204 |
|
376 |
+
|[emarco177/ice_breaker](https://github.com/emarco177/ice_breaker) | 204 |
|
377 |
+
|[tencentmusic/supersonic](https://github.com/tencentmusic/supersonic) | 202 |
|
378 |
+
|[Azure-Samples/azure-search-power-skills](https://github.com/Azure-Samples/azure-search-power-skills) | 202 |
|
379 |
+
|[blob42/Instrukt](https://github.com/blob42/Instrukt) | 201 |
|
380 |
+
|[langchain-ai/langsmith-sdk](https://github.com/langchain-ai/langsmith-sdk) | 200 |
|
381 |
+
|[SamPink/dev-gpt](https://github.com/SamPink/dev-gpt) | 200 |
|
382 |
+
|[ju-bezdek/langchain-decorators](https://github.com/ju-bezdek/langchain-decorators) | 198 |
|
383 |
+
|[KMnO4-zx/huanhuan-chat](https://github.com/KMnO4-zx/huanhuan-chat) | 196 |
|
384 |
+
|[Azure-Samples/jp-azureopenai-samples](https://github.com/Azure-Samples/jp-azureopenai-samples) | 192 |
|
385 |
+
|[hongbo-miao/hongbomiao.com](https://github.com/hongbo-miao/hongbomiao.com) | 190 |
|
386 |
+
|[CakeCrusher/openplugin](https://github.com/CakeCrusher/openplugin) | 190 |
|
387 |
+
|[PaddlePaddle/ERNIE-Bot-SDK](https://github.com/PaddlePaddle/ERNIE-Bot-SDK) | 189 |
|
388 |
+
|[retr0reg/Ret2GPT](https://github.com/retr0reg/Ret2GPT) | 189 |
|
389 |
+
|[AmineDiro/cria](https://github.com/AmineDiro/cria) | 187 |
|
390 |
+
|[lancedb/vectordb-recipes](https://github.com/lancedb/vectordb-recipes) | 186 |
|
391 |
+
|[vaibkumr/prompt-optimizer](https://github.com/vaibkumr/prompt-optimizer) | 185 |
|
392 |
+
|[aws-ia/ecs-blueprints](https://github.com/aws-ia/ecs-blueprints) | 184 |
|
393 |
+
|[ethanyanjiali/minChatGPT](https://github.com/ethanyanjiali/minChatGPT) | 183 |
|
394 |
+
|[MuhammadMoinFaisal/LargeLanguageModelsProjects](https://github.com/MuhammadMoinFaisal/LargeLanguageModelsProjects) | 182 |
|
395 |
+
|[shauryr/S2QA](https://github.com/shauryr/S2QA) | 181 |
|
396 |
+
|[summarizepaper/summarizepaper](https://github.com/summarizepaper/summarizepaper) | 180 |
|
397 |
+
|[NomaDamas/RAGchain](https://github.com/NomaDamas/RAGchain) | 179 |
|
398 |
+
|[pnkvalavala/repochat](https://github.com/pnkvalavala/repochat) | 179 |
|
399 |
+
|[ibiscp/LLM-IMDB](https://github.com/ibiscp/LLM-IMDB) | 177 |
|
400 |
+
|[fengyuli-dev/multimedia-gpt](https://github.com/fengyuli-dev/multimedia-gpt) | 177 |
|
401 |
+
|[langchain-ai/text-split-explorer](https://github.com/langchain-ai/text-split-explorer) | 175 |
|
402 |
+
|[iMagist486/ElasticSearch-Langchain-Chatglm2](https://github.com/iMagist486/ElasticSearch-Langchain-Chatglm2) | 175 |
|
403 |
+
|[limaoyi1/Auto-PPT](https://github.com/limaoyi1/Auto-PPT) | 175 |
|
404 |
+
|[Open-Swarm-Net/GPT-Swarm](https://github.com/Open-Swarm-Net/GPT-Swarm) | 175 |
|
405 |
+
|[morpheuslord/HackBot](https://github.com/morpheuslord/HackBot) | 174 |
|
406 |
+
|[v7labs/benchllm](https://github.com/v7labs/benchllm) | 174 |
|
407 |
+
|[Coding-Crashkurse/Langchain-Full-Course](https://github.com/Coding-Crashkurse/Langchain-Full-Course) | 174 |
|
408 |
+
|[dongyh20/Octopus](https://github.com/dongyh20/Octopus) | 173 |
|
409 |
+
|[kimtth/azure-openai-llm-vector-langchain](https://github.com/kimtth/azure-openai-llm-vector-langchain) | 173 |
|
410 |
+
|[mayooear/private-chatbot-mpt30b-langchain](https://github.com/mayooear/private-chatbot-mpt30b-langchain) | 173 |
|
411 |
+
|[zilliztech/akcio](https://github.com/zilliztech/akcio) | 172 |
|
412 |
+
|[jmpaz/promptlib](https://github.com/jmpaz/promptlib) | 172 |
|
413 |
+
|[ccurme/yolopandas](https://github.com/ccurme/yolopandas) | 172 |
|
414 |
+
|[joaomdmoura/CrewAI](https://github.com/joaomdmoura/CrewAI) | 170 |
|
415 |
+
|[katanaml/llm-mistral-invoice-cpu](https://github.com/katanaml/llm-mistral-invoice-cpu) | 170 |
|
416 |
+
|[chakkaradeep/pyCodeAGI](https://github.com/chakkaradeep/pyCodeAGI) | 170 |
|
417 |
+
|[mudler/LocalAGI](https://github.com/mudler/LocalAGI) | 167 |
|
418 |
+
|[dssjon/biblos](https://github.com/dssjon/biblos) | 165 |
|
419 |
+
|[kjappelbaum/gptchem](https://github.com/kjappelbaum/gptchem) | 165 |
|
420 |
+
|[xxw1995/chatglm3-finetune](https://github.com/xxw1995/chatglm3-finetune) | 164 |
|
421 |
+
|[ArjanCodes/examples](https://github.com/ArjanCodes/examples) | 163 |
|
422 |
+
|[AIAnytime/Llama2-Medical-Chatbot](https://github.com/AIAnytime/Llama2-Medical-Chatbot) | 163 |
|
423 |
+
|[RCGAI/SimplyRetrieve](https://github.com/RCGAI/SimplyRetrieve) | 162 |
|
424 |
+
|[langchain-ai/langchain-teacher](https://github.com/langchain-ai/langchain-teacher) | 162 |
|
425 |
+
|[menloparklab/falcon-langchain](https://github.com/menloparklab/falcon-langchain) | 162 |
|
426 |
+
|[flurb18/AgentOoba](https://github.com/flurb18/AgentOoba) | 162 |
|
427 |
+
|[homanp/vercel-langchain](https://github.com/homanp/vercel-langchain) | 161 |
|
428 |
+
|[jiran214/langup-ai](https://github.com/jiran214/langup-ai) | 160 |
|
429 |
+
|[JorisdeJong123/7-Days-of-LangChain](https://github.com/JorisdeJong123/7-Days-of-LangChain) | 160 |
|
430 |
+
|[GoogleCloudPlatform/data-analytics-golden-demo](https://github.com/GoogleCloudPlatform/data-analytics-golden-demo) | 159 |
|
431 |
+
|[positive666/Prompt-Can-Anything](https://github.com/positive666/Prompt-Can-Anything) | 159 |
|
432 |
+
|[luisroque/large_laguage_models](https://github.com/luisroque/large_laguage_models) | 159 |
|
433 |
+
|[mlops-for-all/mlops-for-all.github.io](https://github.com/mlops-for-all/mlops-for-all.github.io) | 158 |
|
434 |
+
|[wandb/wandbot](https://github.com/wandb/wandbot) | 158 |
|
435 |
+
|[elastic/elasticsearch-labs](https://github.com/elastic/elasticsearch-labs) | 157 |
|
436 |
+
|[shroominic/funcchain](https://github.com/shroominic/funcchain) | 157 |
|
437 |
+
|[deeppavlov/dream](https://github.com/deeppavlov/dream) | 156 |
|
438 |
+
|[mluogh/eastworld](https://github.com/mluogh/eastworld) | 154 |
|
439 |
+
|[georgesung/llm_qlora](https://github.com/georgesung/llm_qlora) | 154 |
|
440 |
+
|[RUC-GSAI/YuLan-Rec](https://github.com/RUC-GSAI/YuLan-Rec) | 153 |
|
441 |
+
|[KylinC/ChatFinance](https://github.com/KylinC/ChatFinance) | 152 |
|
442 |
+
|[Dicklesworthstone/llama2_aided_tesseract](https://github.com/Dicklesworthstone/llama2_aided_tesseract) | 152 |
|
443 |
+
|[c0sogi/LLMChat](https://github.com/c0sogi/LLMChat) | 152 |
|
444 |
+
|[eunomia-bpf/GPTtrace](https://github.com/eunomia-bpf/GPTtrace) | 152 |
|
445 |
+
|[ErikBjare/gptme](https://github.com/ErikBjare/gptme) | 152 |
|
446 |
+
|[Klingefjord/chatgpt-telegram](https://github.com/Klingefjord/chatgpt-telegram) | 152 |
|
447 |
+
|[RoboCoachTechnologies/ROScribe](https://github.com/RoboCoachTechnologies/ROScribe) | 151 |
|
448 |
+
|[Aggregate-Intellect/sherpa](https://github.com/Aggregate-Intellect/sherpa) | 151 |
|
449 |
+
|[3Alan/DocsMind](https://github.com/3Alan/DocsMind) | 151 |
|
450 |
+
|[tangqiaoyu/ToolAlpaca](https://github.com/tangqiaoyu/ToolAlpaca) | 150 |
|
451 |
+
|[kulltc/chatgpt-sql](https://github.com/kulltc/chatgpt-sql) | 150 |
|
452 |
+
|[mallahyari/drqa](https://github.com/mallahyari/drqa) | 150 |
|
453 |
+
|[MedalCollector/Orator](https://github.com/MedalCollector/Orator) | 149 |
|
454 |
+
|[Teahouse-Studios/akari-bot](https://github.com/Teahouse-Studios/akari-bot) | 149 |
|
455 |
+
|[realminchoi/babyagi-ui](https://github.com/realminchoi/babyagi-ui) | 148 |
|
456 |
+
|[ssheng/BentoChain](https://github.com/ssheng/BentoChain) | 148 |
|
457 |
+
|[solana-labs/chatgpt-plugin](https://github.com/solana-labs/chatgpt-plugin) | 147 |
|
458 |
+
|[aurelio-labs/arxiv-bot](https://github.com/aurelio-labs/arxiv-bot) | 147 |
|
459 |
+
|[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci) | 146 |
|
460 |
+
|[menloparklab/langchain-cohere-qdrant-doc-retrieval](https://github.com/menloparklab/langchain-cohere-qdrant-doc-retrieval) | 146 |
|
461 |
+
|[trancethehuman/entities-extraction-web-scraper](https://github.com/trancethehuman/entities-extraction-web-scraper) | 144 |
|
462 |
+
|[peterw/StoryStorm](https://github.com/peterw/StoryStorm) | 144 |
|
463 |
+
|[grumpyp/chroma-langchain-tutorial](https://github.com/grumpyp/chroma-langchain-tutorial) | 144 |
|
464 |
+
|[gh18l/CrawlGPT](https://github.com/gh18l/CrawlGPT) | 142 |
|
465 |
+
|[langchain-ai/langchain-aws-template](https://github.com/langchain-ai/langchain-aws-template) | 142 |
|
466 |
+
|[yasyf/summ](https://github.com/yasyf/summ) | 141 |
|
467 |
+
|[petehunt/langchain-github-bot](https://github.com/petehunt/langchain-github-bot) | 141 |
|
468 |
+
|[hirokidaichi/wanna](https://github.com/hirokidaichi/wanna) | 140 |
|
469 |
+
|[jina-ai/fastapi-serve](https://github.com/jina-ai/fastapi-serve) | 139 |
|
470 |
+
|[zenml-io/zenml-projects](https://github.com/zenml-io/zenml-projects) | 139 |
|
471 |
+
|[jlonge4/local_llama](https://github.com/jlonge4/local_llama) | 139 |
|
472 |
+
|[smyja/blackmaria](https://github.com/smyja/blackmaria) | 138 |
|
473 |
+
|[ChuloAI/BrainChulo](https://github.com/ChuloAI/BrainChulo) | 137 |
|
474 |
+
|[log1stics/voice-generator-webui](https://github.com/log1stics/voice-generator-webui) | 137 |
|
475 |
+
|[davila7/file-gpt](https://github.com/davila7/file-gpt) | 137 |
|
476 |
+
|[dcaribou/transfermarkt-datasets](https://github.com/dcaribou/transfermarkt-datasets) | 136 |
|
477 |
+
|[ciare-robotics/world-creator](https://github.com/ciare-robotics/world-creator) | 135 |
|
478 |
+
|[Undertone0809/promptulate](https://github.com/Undertone0809/promptulate) | 134 |
|
479 |
+
|[fixie-ai/fixie-examples](https://github.com/fixie-ai/fixie-examples) | 134 |
|
480 |
+
|[run-llama/ai-engineer-workshop](https://github.com/run-llama/ai-engineer-workshop) | 133 |
|
481 |
+
|[definitive-io/code-indexer-loop](https://github.com/definitive-io/code-indexer-loop) | 131 |
|
482 |
+
|[mortium91/langchain-assistant](https://github.com/mortium91/langchain-assistant) | 131 |
|
483 |
+
|[baidubce/bce-qianfan-sdk](https://github.com/baidubce/bce-qianfan-sdk) | 130 |
|
484 |
+
|[Ngonie-x/langchain_csv](https://github.com/Ngonie-x/langchain_csv) | 130 |
|
485 |
+
|[IvanIsCoding/ResuLLMe](https://github.com/IvanIsCoding/ResuLLMe) | 130 |
|
486 |
+
|[AnchoringAI/anchoring-ai](https://github.com/AnchoringAI/anchoring-ai) | 129 |
|
487 |
+
|[Azure/business-process-automation](https://github.com/Azure/business-process-automation) | 128 |
|
488 |
+
|[athina-ai/athina-sdk](https://github.com/athina-ai/athina-sdk) | 126 |
|
489 |
+
|[thunlp/ChatEval](https://github.com/thunlp/ChatEval) | 126 |
|
490 |
+
|[prof-frink-lab/slangchain](https://github.com/prof-frink-lab/slangchain) | 126 |
|
491 |
+
|[vietanhdev/pautobot](https://github.com/vietanhdev/pautobot) | 125 |
|
492 |
+
|[awslabs/generative-ai-cdk-constructs](https://github.com/awslabs/generative-ai-cdk-constructs) | 124 |
|
493 |
+
|[sdaaron/QueryGPT](https://github.com/sdaaron/QueryGPT) | 124 |
|
494 |
+
|[rabbitmetrics/langchain-13-min](https://github.com/rabbitmetrics/langchain-13-min) | 124 |
|
495 |
+
|[AutoLLM/AutoAgents](https://github.com/AutoLLM/AutoAgents) | 122 |
|
496 |
+
|[nicknochnack/Nopenai](https://github.com/nicknochnack/Nopenai) | 122 |
|
497 |
+
|[wombyz/HormoziGPT](https://github.com/wombyz/HormoziGPT) | 122 |
|
498 |
+
|[dotvignesh/PDFChat](https://github.com/dotvignesh/PDFChat) | 122 |
|
499 |
+
|[topoteretes/PromethAI-Backend](https://github.com/topoteretes/PromethAI-Backend) | 121 |
|
500 |
+
|[nftblackmagic/flask-langchain](https://github.com/nftblackmagic/flask-langchain) | 121 |
|
501 |
+
|[vishwasg217/finsight](https://github.com/vishwasg217/finsight) | 120 |
|
502 |
+
|[snap-stanford/MLAgentBench](https://github.com/snap-stanford/MLAgentBench) | 120 |
|
503 |
+
|[Azure/app-service-linux-docs](https://github.com/Azure/app-service-linux-docs) | 120 |
|
504 |
+
|[nyanp/chat2plot](https://github.com/nyanp/chat2plot) | 120 |
|
505 |
+
|[ant4g0nist/polar](https://github.com/ant4g0nist/polar) | 119 |
|
506 |
+
|[aws-samples/cdk-eks-blueprints-patterns](https://github.com/aws-samples/cdk-eks-blueprints-patterns) | 119 |
|
507 |
+
|[aws-samples/amazon-kendra-langchain-extensions](https://github.com/aws-samples/amazon-kendra-langchain-extensions) | 119 |
|
508 |
+
|[Xueheng-Li/SynologyChatbotGPT](https://github.com/Xueheng-Li/SynologyChatbotGPT) | 119 |
|
509 |
+
|[CodeAlchemyAI/ViLT-GPT](https://github.com/CodeAlchemyAI/ViLT-GPT) | 117 |
|
510 |
+
|[Lin-jun-xiang/docGPT-langchain](https://github.com/Lin-jun-xiang/docGPT-langchain) | 117 |
|
511 |
+
|[ademakdogan/ChatSQL](https://github.com/ademakdogan/ChatSQL) | 116 |
|
512 |
+
|[aniketmaurya/llm-inference](https://github.com/aniketmaurya/llm-inference) | 115 |
|
513 |
+
|[xuwenhao/mactalk-ai-course](https://github.com/xuwenhao/mactalk-ai-course) | 115 |
|
514 |
+
|[cmooredev/RepoReader](https://github.com/cmooredev/RepoReader) | 115 |
|
515 |
+
|[abi/autocommit](https://github.com/abi/autocommit) | 115 |
|
516 |
+
|[MIDORIBIN/langchain-gpt4free](https://github.com/MIDORIBIN/langchain-gpt4free) | 114 |
|
517 |
+
|[finaldie/auto-news](https://github.com/finaldie/auto-news) | 114 |
|
518 |
+
|[Anil-matcha/Youtube-to-chatbot](https://github.com/Anil-matcha/Youtube-to-chatbot) | 114 |
|
519 |
+
|[avrabyt/MemoryBot](https://github.com/avrabyt/MemoryBot) | 114 |
|
520 |
+
|[Capsize-Games/airunner](https://github.com/Capsize-Games/airunner) | 113 |
|
521 |
+
|[atisharma/llama_farm](https://github.com/atisharma/llama_farm) | 113 |
|
522 |
+
|[mbchang/data-driven-characters](https://github.com/mbchang/data-driven-characters) | 112 |
|
523 |
+
|[fiddler-labs/fiddler-auditor](https://github.com/fiddler-labs/fiddler-auditor) | 112 |
|
524 |
+
|[dirkjbreeuwer/gpt-automated-web-scraper](https://github.com/dirkjbreeuwer/gpt-automated-web-scraper) | 111 |
|
525 |
+
|[Appointat/Chat-with-Document-s-using-ChatGPT-API-and-Text-Embedding](https://github.com/Appointat/Chat-with-Document-s-using-ChatGPT-API-and-Text-Embedding) | 111 |
|
526 |
+
|[hwchase17/langchain-gradio-template](https://github.com/hwchase17/langchain-gradio-template) | 111 |
|
527 |
+
|[artas728/spelltest](https://github.com/artas728/spelltest) | 110 |
|
528 |
+
|[NVIDIA/GenerativeAIExamples](https://github.com/NVIDIA/GenerativeAIExamples) | 109 |
|
529 |
+
|[Azure/aistudio-copilot-sample](https://github.com/Azure/aistudio-copilot-sample) | 108 |
|
530 |
+
|[codefuse-ai/codefuse-chatbot](https://github.com/codefuse-ai/codefuse-chatbot) | 108 |
|
531 |
+
|[apirrone/Memento](https://github.com/apirrone/Memento) | 108 |
|
532 |
+
|[e-johnstonn/GPT-Doc-Summarizer](https://github.com/e-johnstonn/GPT-Doc-Summarizer) | 108 |
|
533 |
+
|[salesforce/BOLAA](https://github.com/salesforce/BOLAA) | 107 |
|
534 |
+
|[Erol444/gpt4-openai-api](https://github.com/Erol444/gpt4-openai-api) | 106 |
|
535 |
+
|[linjungz/chat-with-your-doc](https://github.com/linjungz/chat-with-your-doc) | 106 |
|
536 |
+
|[crosleythomas/MirrorGPT](https://github.com/crosleythomas/MirrorGPT) | 106 |
|
537 |
+
|[panaverse/learn-generative-ai](https://github.com/panaverse/learn-generative-ai) | 105 |
|
538 |
+
|[Azure/azure-sdk-tools](https://github.com/Azure/azure-sdk-tools) | 105 |
|
539 |
+
|[malywut/gpt_examples](https://github.com/malywut/gpt_examples) | 105 |
|
540 |
+
|[ritun16/chain-of-verification](https://github.com/ritun16/chain-of-verification) | 104 |
|
541 |
+
|[langchain-ai/langchain-benchmarks](https://github.com/langchain-ai/langchain-benchmarks) | 104 |
|
542 |
+
|[lightninglabs/LangChainBitcoin](https://github.com/lightninglabs/LangChainBitcoin) | 104 |
|
543 |
+
|[flepied/second-brain-agent](https://github.com/flepied/second-brain-agent) | 103 |
|
544 |
+
|[llmapp/openai.mini](https://github.com/llmapp/openai.mini) | 102 |
|
545 |
+
|[gimlet-ai/tddGPT](https://github.com/gimlet-ai/tddGPT) | 102 |
|
546 |
+
|[jlonge4/gpt_chatwithPDF](https://github.com/jlonge4/gpt_chatwithPDF) | 102 |
|
547 |
+
|[agentification/RAFA_code](https://github.com/agentification/RAFA_code) | 101 |
|
548 |
+
|[pacman100/DHS-LLM-Workshop](https://github.com/pacman100/DHS-LLM-Workshop) | 101 |
|
549 |
+
|[aws-samples/private-llm-qa-bot](https://github.com/aws-samples/private-llm-qa-bot) | 101 |

_Generated by [github-dependents-info](https://github.com/nvuillam/github-dependents-info)_

`github-dependents-info --repo "langchain-ai/langchain" --markdownfile dependents.md --minstars 100 --sort stars`
langchain_md_files/additional_resources/tutorials.mdx
ADDED
@@ -0,0 +1,52 @@
# 3rd Party Tutorials

## Tutorials

### [LangChain v 0.1 by LangChain.ai](https://www.youtube.com/playlist?list=PLfaIDFEXuae0gBSJ9T0w7cu7iJZbH3T31)
|
6 |
+
### [Build with Langchain - Advanced by LangChain.ai](https://www.youtube.com/playlist?list=PLfaIDFEXuae06tclDATrMYY0idsTdLg9v)
|
7 |
+
### [LangGraph by LangChain.ai](https://www.youtube.com/playlist?list=PLfaIDFEXuae16n2TWUkKq5PgJ0w6Pkwtg)
|
8 |
+
### [by Greg Kamradt](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5)
|
9 |
+
### [by Sam Witteveen](https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ)
|
10 |
+
### [by James Briggs](https://www.youtube.com/playlist?list=PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F)
|
11 |
+
### [by Prompt Engineering](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr)
|
12 |
+
### [by Mayo Oshin](https://www.youtube.com/@chatwithdata/search?query=langchain)
|
13 |
+
### [by 1 little Coder](https://www.youtube.com/playlist?list=PLpdmBGJ6ELUK-v0MK-t4wZmVEbxM5xk6L)
|
14 |
+
### [by BobLin (Chinese language)](https://www.youtube.com/playlist?list=PLbd7ntv6PxC3QMFQvtWfk55p-Op_syO1C)
|
15 |
+
### [by Total Technology Zonne](https://youtube.com/playlist?list=PLI8raxzYtfGyE02fAxiM1CPhLUuqcTLWg&si=fkAye16rQKBJVHc9)
|
16 |
+
|
17 |
+
## Courses
|
18 |
+
|
19 |
+
### Featured courses on Deeplearning.AI
|
20 |
+
|
21 |
+
- [LangChain for LLM Application Development](https://www.deeplearning.ai/short-courses/langchain-for-llm-application-development/)
|
22 |
+
- [LangChain Chat with Your Data](https://www.deeplearning.ai/short-courses/langchain-chat-with-your-data/)
|
23 |
+
- [Functions, Tools and Agents with LangChain](https://www.deeplearning.ai/short-courses/functions-tools-agents-langchain/)
|
24 |
+
- [Build LLM Apps with LangChain.js](https://www.deeplearning.ai/short-courses/build-llm-apps-with-langchain-js/)
|
25 |
+
|
26 |
+
### Online courses
|
27 |
+
|
28 |
+
- [Udemy](https://www.udemy.com/courses/search/?q=langchain)
|
29 |
+
- [DataCamp](https://www.datacamp.com/courses/developing-llm-applications-with-langchain)
|
30 |
+
- [Pluralsight](https://www.pluralsight.com/search?q=langchain)
|
31 |
+
- [Coursera](https://www.coursera.org/search?query=langchain)
|
32 |
+
- [Maven](https://maven.com/courses?query=langchain)
|
33 |
+
- [Udacity](https://www.udacity.com/catalog/all/any-price/any-school/any-skill/any-difficulty/any-duration/any-type/relevance/page-1?searchValue=langchain)
|
34 |
+
- [LinkedIn Learning](https://www.linkedin.com/search/results/learning/?keywords=langchain)
|
35 |
+
- [edX](https://www.edx.org/search?q=langchain)
|
36 |
+
- [freeCodeCamp](https://www.youtube.com/@freecodecamp/search?query=langchain)
|
37 |
+
|
38 |
+
## Short Tutorials
|
39 |
+
|
40 |
+
- [by Nicholas Renotte](https://youtu.be/MlK6SIjcjE8)
|
41 |
+
- [by Patrick Loeber](https://youtu.be/LbT1yp6quS8)
|
42 |
+
- [by Rabbitmetrics](https://youtu.be/aywZrzNaKjs)
|
43 |
+
- [by Ivan Reznikov](https://medium.com/@ivanreznikov/langchain-101-course-updated-668f7b41d6cb)
|
44 |
+
|
45 |
+
## Books and Handbooks
|
46 |
+
|
47 |
+
- [Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffrath](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
|
48 |
+
- [LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**
|
49 |
+
- [LangChain Cheatsheet](https://pub.towardsai.net/langchain-cheatsheet-all-secrets-on-a-single-page-8be26b721cde) by **Ivan Reznikov**
|
50 |
+
- [Dive into Langchain (Chinese language)](https://langchain.boblin.app/)

---------------------
langchain_md_files/additional_resources/youtube.mdx
ADDED
@@ -0,0 +1,63 @@
# YouTube videos

[Updated 2024-05-16]

### [Official LangChain YouTube channel](https://www.youtube.com/@LangChain)

### [Tutorials on YouTube](/docs/additional_resources/tutorials/#tutorials)

## Videos (sorted by views)

Only videos with 40K+ views:

- [Using `ChatGPT` with YOUR OWN Data. This is magical. (LangChain `OpenAI API`)](https://youtu.be/9AXP7tCI9PI)
|
14 |
+
- [Chat with Multiple `PDFs` | LangChain App Tutorial in Python (Free LLMs and Embeddings)](https://youtu.be/dXxQ0LR-3Hg?si=pjXKhsHRzn10vOqX)
|
15 |
+
- [`Hugging Face` + Langchain in 5 mins | Access 200k+ FREE AI models for your AI apps](https://youtu.be/_j7JEDWuqLE?si=psimQscN3qo2dOa9)
|
16 |
+
- [LangChain Crash Course For Beginners | LangChain Tutorial](https://youtu.be/nAmC7SoVLd8?si=qJdvyG5-rnjqfdj1)
|
17 |
+
- [Vector Embeddings Tutorial – Code Your Own AI Assistant with GPT-4 API + LangChain + NLP](https://youtu.be/yfHHvmaMkcA?si=UBP3yw50cLm3a2nj)
|
18 |
+
- [Development with Large Language Models Tutorial – `OpenAI`, Langchain, Agents, `Chroma`](https://youtu.be/xZDB1naRUlk?si=v8J1q6oFHRyTkf7Y)
|
19 |
+
- [Langchain: `PDF` Chat App (GUI) | ChatGPT for Your PDF FILES | Step-by-Step Tutorial](https://youtu.be/RIWbalZ7sTo?si=LbKsCcuyv0BtnrTY)
|
20 |
+
- [Vector Search `RAG` Tutorial – Combine Your Data with LLMs with Advanced Search](https://youtu.be/JEBDfGqrAUA?si=pD7oxpfwWeJCxfBt)
|
21 |
+
- [LangChain Crash Course for Beginners](https://youtu.be/lG7Uxts9SXs?si=Yte4S5afN7KNCw0F)
|
22 |
+
- [Learn `RAG` From Scratch – Python AI Tutorial from a LangChain Engineer](https://youtu.be/sVcwVQRHIc8?si=_LN4g0vOgSdtlB3S)
|
23 |
+
- [`Llama 2` in LangChain — FIRST Open Source Conversational Agent!](https://youtu.be/6iHVJyX2e50?si=rtq1maPrzWKHbwVV)
|
24 |
+
- [LangChain Tutorial for Beginners | Generative AI Series](https://youtu.be/cQUUkZnyoD0?si=KYz-bvcocdqGh9f_)
|
25 |
+
- [Chatbots with `RAG`: LangChain Full Walkthrough](https://youtu.be/LhnCsygAvzY?si=yS7T98VLfcWdkDek)
|
26 |
+
- [LangChain Explained In 15 Minutes - A MUST Learn For Python Programmers](https://youtu.be/mrjq3lFz23s?si=wkQGcSKUJjuiiEPf)
|
27 |
+
- [LLM Project | End to End LLM Project Using Langchain, `OpenAI` in Finance Domain](https://youtu.be/MoqgmWV1fm8?si=oVl-5kJVgd3a07Y_)
|
28 |
+
- [What is LangChain?](https://youtu.be/1bUy-1hGZpI?si=NZ0D51VM5y-DhjGe)
|
29 |
+
- [`RAG` + Langchain Python Project: Easy AI/Chat For Your Doc](https://youtu.be/tcqEUSNCn8I?si=RLcWPBVLIErRqdmU)
|
30 |
+
- [Getting Started With LangChain In 20 Minutes- Build Celebrity Search Application](https://youtu.be/_FpT1cwcSLg?si=X9qVazlXYucN_JBP)
|
31 |
+
- [LangChain GEN AI Tutorial – 6 End-to-End Projects using OpenAI, Google `Gemini Pro`, `LLAMA2`](https://youtu.be/x0AnCE9SE4A?si=_92gJYm7kb-V2bi0)
|
32 |
+
- [Complete Langchain GEN AI Crash Course With 6 End To End LLM Projects With OPENAI, `LLAMA2`, `Gemini Pro`](https://youtu.be/aWKrL4z5H6w?si=NVLi7Yiq0ccE7xXE)
|
33 |
+
- [AI Leader Reveals The Future of AI AGENTS (LangChain CEO)](https://youtu.be/9ZhbA0FHZYc?si=1r4P6kRvKVvEhRgE)
|
34 |
+
- [Learn How To Query Pdf using Langchain Open AI in 5 min](https://youtu.be/5Ghv-F1wF_0?si=ZZRjrWfeiFOVrcvu)
|
35 |
+
- [Reliable, fully local RAG agents with `LLaMA3`](https://youtu.be/-ROS6gfYIts?si=75CXA8W_BbnkIxcV)
|
36 |
+
- [Learn `LangChain.js` - Build LLM apps with JavaScript and `OpenAI`](https://youtu.be/HSZ_uaif57o?si=Icj-RAhwMT-vHaYA)
|
37 |
+
- [LLM Project | End to End LLM Project Using LangChain, Google Palm In Ed-Tech Industry](https://youtu.be/AjQPRomyd-k?si=eC3NT6kn02Lhpz-_)
|
38 |
+
- [Chatbot Answering from Your Own Knowledge Base: Langchain, `ChatGPT`, `Pinecone`, and `Streamlit`: | Code](https://youtu.be/nAKhxQ3hcMA?si=9Zd_Nd_jiYhtml5w)
|
39 |
+
- [LangChain is AMAZING | Quick Python Tutorial](https://youtu.be/I4mFqyqFkxg?si=aJ66qh558OfNAczD)
|
40 |
+
- [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw?si=kZR-lnJwixeVrjmh)
|
41 |
+
- [Using NEW `MPT-7B` in `Hugging Face` and LangChain](https://youtu.be/DXpk9K7DgMo?si=99JDpV_ueimwJhMi)
|
42 |
+
- [LangChain - COMPLETE TUTORIAL - Basics to advanced concept!](https://youtu.be/a89vqgK-Qcs?si=0aVO2EOqsw7GE5e3)
|
43 |
+
- [LangChain Agents: Simply Explained!](https://youtu.be/Xi9Ui-9qcPw?si=DCuG7nGx8dxcfhkx)
|
44 |
+
- [Chat With Multiple `PDF` Documents With Langchain And Google `Gemini Pro`](https://youtu.be/uus5eLz6smA?si=YUwvHtaZsGeIl0WD)
|
45 |
+
- [LLM Project | End to end LLM project Using Langchain, `Google Palm` in Retail Industry](https://youtu.be/4wtrl4hnPT8?si=_eOKPpdLfWu5UXMQ)
|
46 |
+
- [Tutorial | Chat with any Website using Python and Langchain](https://youtu.be/bupx08ZgSFg?si=KRrjYZFnuLsstGwW)
|
47 |
+
- [Prompt Engineering And LLM's With LangChain In One Shot-Generative AI](https://youtu.be/t2bSApmPzU4?si=87vPQQtYEWTyu2Kx)
|
48 |
+
- [Build a Custom Chatbot with `OpenAI`: `GPT-Index` & LangChain | Step-by-Step Tutorial](https://youtu.be/FIDv6nc4CgU?si=gR1u3DUG9lvzBIKK)
|
49 |
+
- [Search Your `PDF` App using Langchain, `ChromaDB`, and Open Source LLM: No OpenAI API (Runs on CPU)](https://youtu.be/rIV1EseKwU4?si=UxZEoXSiPai8fXgl)
|
50 |
+
- [Building a `RAG` application from scratch using Python, LangChain, and the `OpenAI API`](https://youtu.be/BrsocJb-fAo?si=hvkh9iTGzJ-LnsX-)
|
51 |
+
- [Function Calling via `ChatGPT API` - First Look With LangChain](https://youtu.be/0-zlUy7VUjg?si=Vc6LFseckEc6qvuk)
|
52 |
+
- [Private GPT, free deployment! Langchain-Chachat helps you easily play with major mainstream AI models! | Zero Degree Commentary](https://youtu.be/3LLUyaHP-3I?si=AZumEeFXsvqaLl0f)
|
53 |
+
- [Create a ChatGPT clone using `Streamlit` and LangChain](https://youtu.be/IaTiyQ2oYUQ?si=WbgsYmqPDnMidSUK)
|
54 |
+
- [What's next for AI agents ft. LangChain's Harrison Chase](https://youtu.be/pBBe1pk8hf4?si=H4vdBF9nmkNZxiHt)
|
55 |
+
- [`LangFlow`: Build Chatbots without Writing Code - LangChain](https://youtu.be/KJ-ux3hre4s?si=TJuDu4bAlva1myNL)
|
56 |
+
- [Building a LangChain Custom Medical Agent with Memory](https://youtu.be/6UFtRwWnHws?si=wymYad26VgigRkHy)
|
57 |
+
- [`Ollama` meets LangChain](https://youtu.be/k_1pOF1mj8k?si=RlBiCrmaR3s7SnMK)
|
58 |
+
- [End To End LLM Langchain Project using `Pinecone` Vector Database](https://youtu.be/erUfLIi9OFM?si=aHpuHXdIEmAfS4eF)
|
59 |
+
- [`LLaMA2` with LangChain - Basics | LangChain TUTORIAL](https://youtu.be/cIRzwSXB4Rc?si=FUs0OLVJpzKhut0h)
|
60 |
+
- [Understanding `ReACT` with LangChain](https://youtu.be/Eug2clsLtFs?si=imgj534ggxlypS0d)
|
61 |
+
|
62 |
+
---------------------
|
63 |
+
[Updated 2024-05-16]
|
langchain_md_files/changes/changelog/core.mdx
ADDED
@@ -0,0 +1,10 @@
1 |
+
# langchain-core
|
2 |
+
|
3 |
+
## 0.1.x
|
4 |
+
|
5 |
+
#### Deprecated
|
6 |
+
|
7 |
+
- `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
|
8 |
+
- `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
|
9 |
+
- `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
|
10 |
+
- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
|
langchain_md_files/changes/changelog/langchain.mdx
ADDED
@@ -0,0 +1,93 @@
1 |
+
# langchain
|
2 |
+
|
3 |
+
## 0.2.0
|
4 |
+
|
5 |
+
### Deleted
|
6 |
+
|
7 |
+
As of release 0.2.0, `langchain` is required to be integration-agnostic. This means that code in `langchain` should not, by default, instantiate any specific chat models, LLMs, embedding models, vector stores, etc.; instead, the user will be required to specify those explicitly.
|
8 |
+
|
9 |
+
The following functions and classes require an explicit LLM to be passed as an argument:
|
10 |
+
|
11 |
+
- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit`
|
12 |
+
- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit`
|
13 |
+
- `langchain.chains.openai_functions.get_openapi_chain`
|
14 |
+
- `langchain.chains.router.MultiRetrievalQAChain.from_retrievers`
|
15 |
+
- `langchain.indexes.VectorStoreIndexWrapper.query`
|
16 |
+
- `langchain.indexes.VectorStoreIndexWrapper.query_with_sources`
|
17 |
+
- `langchain.indexes.VectorStoreIndexWrapper.aquery_with_sources`
|
18 |
+
- `langchain.chains.flare.FlareChain`
|
19 |
+
|
20 |
+
The following classes now require passing an explicit Embedding model as an argument:
|
21 |
+
|
22 |
+
- `langchain.indexes.VectorstoreIndexCreator`
|
23 |
+
|
24 |
+
The following code has been removed:
|
25 |
+
|
26 |
+
- `langchain.natbot.NatBotChain.from_default` removed in favor of the `from_llm` class method.
|
27 |
+
|
28 |
+
### Deprecated
|
29 |
+
|
30 |
+
We have two main types of deprecations:
|
31 |
+
|
32 |
+
1. Code that was moved from `langchain` into another package (e.g., `langchain-community`)
|
33 |
+
|
34 |
+
If you try to import it from `langchain`, the import will keep on working, but will raise a deprecation warning. The warning will provide a replacement import statement.
|
35 |
+
|
36 |
+
```shell
|
37 |
+
python -c "from langchain.document_loaders.markdown import UnstructuredMarkdownLoader"
|
38 |
+
|
39 |
+
```
|
40 |
+
|
41 |
+
```text
|
42 |
+
LangChainDeprecationWarning: Importing UnstructuredMarkdownLoader from langchain.document_loaders is deprecated. Please replace deprecated imports:
|
43 |
+
|
44 |
+
>> from langchain.document_loaders import UnstructuredMarkdownLoader
|
45 |
+
|
46 |
+
with new imports of:
|
47 |
+
|
48 |
+
>> from langchain_community.document_loaders import UnstructuredMarkdownLoader
|
49 |
+
```
|
50 |
+
|
51 |
+
We will continue supporting the imports in `langchain` until release 0.4 as long as the relevant package where the code lives is installed. (e.g., as long as `langchain_community` is installed.)
|
52 |
+
|
53 |
+
However, we advise users not to rely on these imports and instead to migrate to the new imports. To help with this process, we're releasing a migration script via the LangChain CLI. See further instructions in the migration guide.
|
54 |
+
|
55 |
+
2. Code that has better alternatives available and will eventually be removed, so there's only a single way to do things. (e.g., `predict_messages` method in ChatModels has been deprecated in favor of `invoke`).
|
56 |
+
|
57 |
+
Many of these were marked for removal in 0.2. We have bumped the removal to 0.3.
|
58 |
+
|
59 |
+
|
60 |
+
## 0.1.0 (Jan 5, 2024)
|
61 |
+
|
62 |
+
### Deleted
|
63 |
+
|
64 |
+
No deletions.
|
65 |
+
|
66 |
+
### Deprecated
|
67 |
+
|
68 |
+
Deprecated classes and methods will be removed in 0.2.0
|
69 |
+
|
70 |
+
| Deprecated | Alternative | Reason |
|
71 |
+
|---------------------------------|-----------------------------------|------------------------------------------------|
|
72 |
+
| ChatVectorDBChain | ConversationalRetrievalChain | More general to all retrievers |
|
73 |
+
| create_ernie_fn_chain | create_ernie_fn_runnable | Use LCEL under the hood |
|
74 |
+
| created_structured_output_chain | create_structured_output_runnable | Use LCEL under the hood |
|
75 |
+
| NatBotChain | | Not used |
|
76 |
+
| create_openai_fn_chain | create_openai_fn_runnable | Use LCEL under the hood |
|
77 |
+
| create_structured_output_chain | create_structured_output_runnable | Use LCEL under the hood |
|
78 |
+
| load_query_constructor_chain | load_query_constructor_runnable | Use LCEL under the hood |
|
79 |
+
| VectorDBQA | RetrievalQA | More general to all retrievers |
|
80 |
+
| Sequential Chain | LCEL | Obviated by LCEL |
|
81 |
+
| SimpleSequentialChain | LCEL | Obviated by LCEL |
|
82 |
+
| TransformChain | LCEL/RunnableLambda | Obviated by LCEL |
|
83 |
+
| create_tagging_chain | create_structured_output_runnable | Use LCEL under the hood |
|
84 |
+
| ChatAgent | create_react_agent | Use LCEL builder over a class |
|
85 |
+
| ConversationalAgent | create_react_agent | Use LCEL builder over a class |
|
86 |
+
| ConversationalChatAgent | create_json_chat_agent | Use LCEL builder over a class |
|
87 |
+
| initialize_agent | Individual create agent methods | Individual create agent methods are more clear |
|
88 |
+
| ZeroShotAgent | create_react_agent | Use LCEL builder over a class |
|
89 |
+
| OpenAIFunctionsAgent | create_openai_functions_agent | Use LCEL builder over a class |
|
90 |
+
| OpenAIMultiFunctionsAgent | create_openai_tools_agent | Use LCEL builder over a class |
|
91 |
+
| SelfAskWithSearchAgent | create_self_ask_with_search | Use LCEL builder over a class |
|
92 |
+
| StructuredChatAgent | create_structured_chat_agent | Use LCEL builder over a class |
|
93 |
+
| XMLAgent | create_xml_agent | Use LCEL builder over a class |
|
langchain_md_files/concepts/agents.mdx
ADDED
@@ -0,0 +1,25 @@
1 |
+
# Agents
|
2 |
+
|
3 |
+
By themselves, language models can't take actions - they just output text. Agents are systems that take a high-level task and use an LLM as a reasoning engine to decide what actions to take and execute those actions.
|
4 |
+
|
5 |
+
[LangGraph](/docs/concepts/architecture#langgraph) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents. We recommend that you use LangGraph for building agents.
|
6 |
+
|
7 |
+
Please see the following resources for more information:
|
8 |
+
|
9 |
+
* LangGraph docs on [common agent architectures](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/)
|
10 |
+
* [Pre-built agents in LangGraph](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent)
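As a quick orientation, the prebuilt LangGraph agent referenced above can be created in a few lines. This is a minimal sketch only: it assumes `langgraph` and `langchain-openai` are installed, an `OPENAI_API_KEY` is set, and the model name and tool are purely illustrative.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def get_weather(city: str) -> str:
    """Return a (fake) weather report for a city."""
    return f"It is sunny in {city}."

# The LLM acts as the reasoning engine; the agent decides when to call the tool.
agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), [get_weather])
result = agent.invoke({"messages": [("human", "What's the weather in Paris?")]})
```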
|
11 |
+
|
12 |
+
## Legacy agent concept: AgentExecutor
|
13 |
+
|
14 |
+
LangChain previously introduced the `AgentExecutor` as a runtime for agents.
|
15 |
+
While it served as an excellent starting point, its limitations became apparent when dealing with more sophisticated and customized agents.
|
16 |
+
As a result, we're gradually phasing out `AgentExecutor` in favor of more flexible solutions in LangGraph.
|
17 |
+
|
18 |
+
### Transitioning from AgentExecutor to langgraph
|
19 |
+
|
20 |
+
If you're currently using `AgentExecutor`, don't worry! We've prepared resources to help you:
|
21 |
+
|
22 |
+
1. For those who still need to use `AgentExecutor`, we offer a comprehensive guide on [how to use AgentExecutor](/docs/how_to/agent_executor).
|
23 |
+
|
24 |
+
2. However, we strongly recommend transitioning to LangGraph for improved flexibility and control. To facilitate this transition, we've created a detailed [migration guide](/docs/how_to/migrate_agent) to help you move from `AgentExecutor` to LangGraph seamlessly.
|
25 |
+
|
langchain_md_files/concepts/architecture.mdx
ADDED
@@ -0,0 +1,78 @@
1 |
+
import ThemedImage from '@theme/ThemedImage';
|
2 |
+
import useBaseUrl from '@docusaurus/useBaseUrl';
|
3 |
+
|
4 |
+
# Architecture
|
5 |
+
|
6 |
+
LangChain is a framework that consists of a number of packages.
|
7 |
+
|
8 |
+
<ThemedImage
|
9 |
+
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
|
10 |
+
sources={{
|
11 |
+
light: useBaseUrl('/svg/langchain_stack_112024.svg'),
|
12 |
+
dark: useBaseUrl('/svg/langchain_stack_112024_dark.svg'),
|
13 |
+
}}
|
14 |
+
title="LangChain Framework Overview"
|
15 |
+
style={{ width: "100%" }}
|
16 |
+
/>
|
17 |
+
|
18 |
+
|
19 |
+
## langchain-core
|
20 |
+
|
21 |
+
This package contains base abstractions for different components and ways to compose them together.
|
22 |
+
The interfaces for core components like chat models, vector stores, tools and more are defined here.
|
23 |
+
No third-party integrations are defined here.
|
24 |
+
The dependencies are very lightweight.
|
25 |
+
|
26 |
+
## langchain
|
27 |
+
|
28 |
+
The main `langchain` package contains chains and retrieval strategies that make up an application's cognitive architecture.
|
29 |
+
These are NOT third-party integrations.
|
30 |
+
All chains, agents, and retrieval strategies here are NOT specific to any one integration, but rather generic across all integrations.
|
31 |
+
|
32 |
+
## Integration packages
|
33 |
+
|
34 |
+
Popular integrations have their own packages (e.g., `langchain-openai`, `langchain-anthropic`, etc.) so that they can be properly versioned and appropriately lightweight.
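For example, a sketch of what this looks like in practice (assuming the integration package is installed and an Anthropic API key is configured; the model name is illustrative):

```python
# pip install langchain-anthropic
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-5-sonnet-20240620")
```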
|
35 |
+
|
36 |
+
For more information see:
|
37 |
+
|
38 |
+
* A list of [integration packages](/docs/integrations/providers/)
|
39 |
+
* The [API Reference](https://python.langchain.com/api_reference/) where you can find detailed information about each integration package.
|
40 |
+
|
41 |
+
## langchain-community
|
42 |
+
|
43 |
+
This package contains third-party integrations that are maintained by the LangChain community.
|
44 |
+
Key integration packages are separated out (see above).
|
45 |
+
This contains integrations for various components (chat models, vector stores, tools, etc).
|
46 |
+
All dependencies in this package are optional to keep the package as lightweight as possible.
|
47 |
+
|
48 |
+
## langgraph
|
49 |
+
|
50 |
+
`langgraph` is an extension of `langchain` aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
|
51 |
+
|
52 |
+
LangGraph exposes high level interfaces for creating common types of agents, as well as a low-level API for composing custom flows.
|
53 |
+
|
54 |
+
:::info[Further reading]
|
55 |
+
|
56 |
+
* See our LangGraph overview [here](https://langchain-ai.github.io/langgraph/concepts/high_level/#core-principles).
|
57 |
+
* See our LangGraph Academy Course [here](https://academy.langchain.com/courses/intro-to-langgraph).
|
58 |
+
|
59 |
+
:::
|
60 |
+
|
61 |
+
## langserve
|
62 |
+
|
63 |
+
A package to deploy LangChain chains as REST APIs. Makes it easy to get a production-ready API up and running.
|
64 |
+
|
65 |
+
:::important
|
66 |
+
LangServe is designed primarily to deploy simple Runnables and to work with well-known primitives in langchain-core.
|
67 |
+
|
68 |
+
If you need a deployment option for LangGraph, you should instead look at LangGraph Platform (beta), which is better suited for deploying LangGraph applications.
|
69 |
+
:::
|
70 |
+
|
71 |
+
For more information, see the [LangServe documentation](/docs/langserve).
|
72 |
+
|
73 |
+
|
74 |
+
## LangSmith
|
75 |
+
|
76 |
+
A developer platform that lets you debug, test, evaluate, and monitor LLM applications.
|
77 |
+
|
78 |
+
For more information, see the [LangSmith documentation](https://docs.smith.langchain.com)
|
langchain_md_files/concepts/async.mdx
ADDED
@@ -0,0 +1,81 @@
1 |
+
# Async programming with LangChain
|
2 |
+
|
3 |
+
:::info Prerequisites
|
4 |
+
* [Runnable interface](/docs/concepts/runnables)
|
5 |
+
* [asyncio](https://docs.python.org/3/library/asyncio.html)
|
6 |
+
:::
|
7 |
+
|
8 |
+
LLM-based applications often involve many I/O-bound operations, such as making API calls to language models, databases, or other services. Asynchronous programming (or async programming) is a paradigm that allows a program to perform multiple tasks concurrently without blocking the execution of other tasks, improving efficiency and responsiveness, particularly in I/O-bound operations.
|
9 |
+
|
10 |
+
:::note
|
11 |
+
You are expected to be familiar with asynchronous programming in Python before reading this guide. If you are not, please find appropriate resources online to learn how to program asynchronously in Python.
|
12 |
+
This guide specifically focuses on what you need to know to work with LangChain in an asynchronous context, assuming that you are already familiar with asynchronous programming.
|
13 |
+
:::
|
14 |
+
|
15 |
+
## LangChain asynchronous APIs
|
16 |
+
|
17 |
+
Many LangChain APIs are designed to be asynchronous, allowing you to build efficient and responsive applications.
|
18 |
+
|
19 |
+
Typically, any method that may perform I/O operations (e.g., making API calls, reading files) will have an asynchronous counterpart.
|
20 |
+
|
21 |
+
In LangChain, async implementations are located in the same classes as their synchronous counterparts, with the asynchronous methods having an "a" prefix. For example, the synchronous `invoke` method has an asynchronous counterpart called `ainvoke`.
|
22 |
+
|
23 |
+
Many components of LangChain implement the [Runnable Interface](/docs/concepts/runnables), which includes support for asynchronous execution. This means that you can run Runnables asynchronously using the `await` keyword in Python.
|
24 |
+
|
25 |
+
```python
|
26 |
+
await some_runnable.ainvoke(some_input)
|
27 |
+
```
|
28 |
+
|
29 |
+
Other components like [Embedding Models](/docs/concepts/embedding_models) and [VectorStore](/docs/concepts/vectorstores) that do not implement the [Runnable Interface](/docs/concepts/runnables) usually still follow the same rule and include the asynchronous version of a method in the same class with an "a" prefix.
|
30 |
+
|
31 |
+
For example,
|
32 |
+
|
33 |
+
```python
|
34 |
+
await some_vectorstore.aadd_documents(documents)
|
35 |
+
```
|
36 |
+
|
37 |
+
Runnables created using the [LangChain Expression Language (LCEL)](/docs/concepts/lcel) can also be run asynchronously as they implement
|
38 |
+
the full [Runnable Interface](/docs/concepts/runnables).
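For instance, here is a minimal, self-contained sketch that uses a trivial `RunnableLambda` as a stand-in for a real chain, and shows how async runnables can be awaited and fanned out concurrently:

```python
import asyncio

from langchain_core.runnables import RunnableLambda

# Stand-in for a real LCEL chain such as `prompt | chat_model`.
chain = RunnableLambda(lambda name: f"Hello, {name}!")

async def main() -> None:
    single = await chain.ainvoke("LangChain")
    # Fan several calls out concurrently; useful for I/O-bound work.
    many = await asyncio.gather(chain.ainvoke("Alice"), chain.ainvoke("Bob"))
    print(single, many)

asyncio.run(main())
```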
|
39 |
+
|
40 |
+
For more information, please review the [API reference](https://python.langchain.com/api_reference/) for the specific component you are using.
|
41 |
+
|
42 |
+
## Delegation to sync methods
|
43 |
+
|
44 |
+
Most popular LangChain integrations implement asynchronous support of their APIs. For example, the `ainvoke` method of many ChatModel implementations uses the `httpx.AsyncClient` to make asynchronous HTTP requests to the model provider's API.
|
45 |
+
|
46 |
+
When an asynchronous implementation is not available, LangChain tries to provide a default implementation, even if it incurs
|
47 |
+
a **slight** overhead.
|
48 |
+
|
49 |
+
By default, LangChain will delegate the execution of unimplemented asynchronous methods to the synchronous counterparts. LangChain almost always assumes that the synchronous method should be treated as a blocking operation and should be run in a separate thread.
|
50 |
+
This is done using [asyncio.loop.run_in_executor](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor) functionality provided by the `asyncio` library. LangChain uses the default executor provided by the `asyncio` library, which lazily initializes a thread pool executor with a default number of threads that is reused in the given event loop. While this strategy incurs a slight overhead due to context switching between threads, it guarantees that every asynchronous method has a default implementation that works out of the box.
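The general pattern looks roughly like the following sketch (illustrative only, not LangChain's exact internals): a blocking synchronous method is handed to the event loop's default thread pool executor so the loop itself is never blocked.

```python
import asyncio
from functools import partial

def sync_embed(text: str) -> list[float]:
    # Stand-in for a blocking, I/O-bound synchronous implementation.
    return [float(len(text))]

async def aembed(text: str) -> list[float]:
    # Default async fallback: run the sync method in the default executor
    # (a lazily created thread pool) instead of calling it directly.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, partial(sync_embed, text))

print(asyncio.run(aembed("hello")))
```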
|
51 |
+
|
52 |
+
## Performance
|
53 |
+
|
54 |
+
Async code in LangChain should generally perform relatively well with minimal overhead out of the box, and is unlikely
|
55 |
+
to be a bottleneck in most applications.
|
56 |
+
|
57 |
+
The two main sources of overhead are:
|
58 |
+
|
59 |
+
1. Cost of context switching between threads when [delegating to synchronous methods](#delegation-to-sync-methods). This can be addressed by providing a native asynchronous implementation.
|
60 |
+
2. In [LCEL](/docs/concepts/lcel) any "cheap functions" that appear as part of the chain will be either scheduled as tasks on the event loop (if they are async) or run in a separate thread (if they are sync), rather than just be run inline.
|
61 |
+
|
62 |
+
The latency overhead you should expect from these is between tens of microseconds to a few milliseconds.
|
63 |
+
|
64 |
+
A more common source of performance issues arises from users accidentally blocking the event loop by calling synchronous code in an async context (e.g., calling `invoke` rather than `ainvoke`).
|
65 |
+
|
66 |
+
## Compatibility
|
67 |
+
|
68 |
+
LangChain is only compatible with the `asyncio` library, which is distributed as part of the Python standard library. It will not work with other async libraries like `trio` or `curio`.
|
69 |
+
|
70 |
+
In Python 3.9 and 3.10, [asyncio's tasks](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task) did not
|
71 |
+
accept a `context` parameter. Due to this limitation, LangChain cannot automatically propagate the `RunnableConfig` down the call chain
|
72 |
+
in certain scenarios.
|
73 |
+
|
74 |
+
If you are experiencing issues with streaming, callbacks or tracing in async code and are using Python 3.9 or 3.10, this is a likely cause.
|
75 |
+
|
76 |
+
Please read [Propagation RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
|
77 |
+
|
78 |
+
## How to use in ipython and jupyter notebooks
|
79 |
+
|
80 |
+
As of IPython 7.0, IPython supports asynchronous REPLs. This means that you can use the `await` keyword in the IPython REPL and Jupyter Notebooks without any additional setup. For more information, see the [IPython blog post](https://blog.jupyter.org/ipython-7-0-async-repl-a35ce050f7f7).
|
81 |
+
|
langchain_md_files/concepts/callbacks.mdx
ADDED
@@ -0,0 +1,73 @@
1 |
+
# Callbacks
|
2 |
+
|
3 |
+
:::note Prerequisites
|
4 |
+
- [Runnable interface](/docs/concepts/runnables)
|
5 |
+
:::
|
6 |
+
|
7 |
+
LangChain provides a callback system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
|
8 |
+
|
9 |
+
You can subscribe to these events by using the `callbacks` argument available throughout the API. This argument is list of handler objects, which are expected to implement one or more of the methods described below in more detail.
|
10 |
+
|
11 |
+
## Callback events
|
12 |
+
|
13 |
+
| Event | Event Trigger | Associated Method |
|
14 |
+
|------------------|---------------------------------------------|-----------------------|
|
15 |
+
| Chat model start | When a chat model starts | `on_chat_model_start` |
|
16 |
+
| LLM start | When a llm starts | `on_llm_start` |
|
17 |
+
| LLM new token | When an llm OR chat model emits a new token | `on_llm_new_token` |
|
18 |
+
| LLM ends | When an llm OR chat model ends | `on_llm_end` |
|
19 |
+
| LLM errors | When an llm OR chat model errors | `on_llm_error` |
|
20 |
+
| Chain start | When a chain starts running | `on_chain_start` |
|
21 |
+
| Chain end | When a chain ends | `on_chain_end` |
|
22 |
+
| Chain error | When a chain errors | `on_chain_error` |
|
23 |
+
| Tool start | When a tool starts running | `on_tool_start` |
|
24 |
+
| Tool end | When a tool ends | `on_tool_end` |
|
25 |
+
| Tool error | When a tool errors | `on_tool_error` |
|
26 |
+
| Agent action | When an agent takes an action | `on_agent_action` |
|
27 |
+
| Agent finish | When an agent ends | `on_agent_finish` |
|
28 |
+
| Retriever start | When a retriever starts | `on_retriever_start` |
|
29 |
+
| Retriever end | When a retriever ends | `on_retriever_end` |
|
30 |
+
| Retriever error | When a retriever errors | `on_retriever_error` |
|
31 |
+
| Text | When arbitrary text is run | `on_text` |
|
32 |
+
| Retry | When a retry event is run | `on_retry` |
|
33 |
+
|
34 |
+
## Callback handlers
|
35 |
+
|
36 |
+
Callback handlers can either be `sync` or `async`:
|
37 |
+
|
38 |
+
* Sync callback handlers implement the [BaseCallbackHandler](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) interface.
|
39 |
+
* Async callback handlers implement the [AsyncCallbackHandler](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) interface.
|
40 |
+
|
41 |
+
During run-time, LangChain configures an appropriate callback manager (e.g., [CallbackManager](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.manager.CallbackManager.html) or [AsyncCallbackManager](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.manager.AsyncCallbackManager.html)), which will be responsible for calling the appropriate method on each "registered" callback handler when the event is triggered.
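As an illustration, a minimal synchronous handler (a sketch; the handler name and print statements are just examples) subclasses `BaseCallbackHandler` and overrides only the events it cares about:

```python
from typing import Any

from langchain_core.callbacks import BaseCallbackHandler

class LoggingHandler(BaseCallbackHandler):
    """Log chain start and end events."""

    def on_chain_start(self, serialized: dict[str, Any], inputs: dict[str, Any], **kwargs: Any) -> None:
        print(f"Chain started with inputs: {inputs}")

    def on_chain_end(self, outputs: dict[str, Any], **kwargs: Any) -> None:
        print(f"Chain ended with outputs: {outputs}")
```

Such a handler can then be passed to a runnable, for example at request time as described in the next section.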
|
42 |
+
|
43 |
+
## Passing callbacks
|
44 |
+
|
45 |
+
The `callbacks` property is available on most objects throughout the API (Models, Tools, Agents, etc.) in two different places:
|
46 |
+
|
47 |
+
- **Request time callbacks**: Passed at the time of the request in addition to the input data.
|
48 |
+
Available on all standard `Runnable` objects. These callbacks are INHERITED by all children
|
49 |
+
of the object they are defined on. For example, `chain.invoke({"number": 25}, {"callbacks": [handler]})`.
|
50 |
+
- **Constructor callbacks**: `chain = TheNameOfSomeChain(callbacks=[handler])`. These callbacks
|
51 |
+
are passed as arguments to the constructor of the object. The callbacks are scoped
|
52 |
+
only to the object they are defined on, and are **not** inherited by any children of the object.
|
53 |
+
|
54 |
+
:::warning
|
55 |
+
Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children
|
56 |
+
of the object.
|
57 |
+
:::
|
58 |
+
|
59 |
+
If you're creating a custom chain or runnable, you need to remember to propagate request time
|
60 |
+
callbacks to any child objects.
|
61 |
+
|
62 |
+
:::important Async in Python<=3.10
|
63 |
+
|
64 |
+
Any `RunnableLambda`, `RunnableGenerator`, or `Tool` that invokes other runnables
|
65 |
+
and is running `async` in python<=3.10, will have to propagate callbacks to child
|
66 |
+
objects manually. This is because LangChain cannot automatically propagate
|
67 |
+
callbacks to child objects in this case.
|
68 |
+
|
69 |
+
This is a common reason why you may fail to see events being emitted from custom
|
70 |
+
runnables or tools.
|
71 |
+
:::
|
72 |
+
|
73 |
+
For specifics on how to use callbacks, see the [relevant how-to guides here](/docs/how_to/#callbacks).
|
langchain_md_files/concepts/chat_history.mdx
ADDED
@@ -0,0 +1,46 @@
1 |
+
# Chat history
|
2 |
+
|
3 |
+
:::info Prerequisites
|
4 |
+
|
5 |
+
- [Messages](/docs/concepts/messages)
|
6 |
+
- [Chat models](/docs/concepts/chat_models)
|
7 |
+
- [Tool calling](/docs/concepts/tool_calling)
|
8 |
+
:::
|
9 |
+
|
10 |
+
Chat history is a record of the conversation between the user and the chat model. It is used to maintain context and state throughout the conversation. The chat history is a sequence of [messages](/docs/concepts/messages), each of which is associated with a specific [role](/docs/concepts/messages#role), such as "user", "assistant", "system", or "tool".
|
11 |
+
|
12 |
+
## Conversation patterns
|
13 |
+
|
14 |
+

|
15 |
+
|
16 |
+
Most conversations start with a **system message** that sets the context for the conversation. This is followed by a **user message** containing the user's input, and then an **assistant message** containing the model's response.
|
17 |
+
|
18 |
+
The **assistant** may respond directly to the user or, if configured with tools, request that a [tool](/docs/concepts/tool_calling) be invoked to perform a specific task.
|
19 |
+
|
20 |
+
A full conversation often involves a combination of two patterns of alternating messages:
|
21 |
+
|
22 |
+
1. The **user** and the **assistant** representing a back-and-forth conversation.
|
23 |
+
2. The **assistant** and **tool messages** representing an ["agentic" workflow](/docs/concepts/agents) where the assistant is invoking tools to perform specific tasks.
|
24 |
+
|
25 |
+
## Managing chat history
|
26 |
+
|
27 |
+
Since chat models have a maximum limit on input size, it's important to manage chat history and trim it as needed to avoid exceeding the [context window](/docs/concepts/chat_models/#context-window).
|
28 |
+
|
29 |
+
While processing chat history, it's essential to preserve a correct conversation structure.
|
30 |
+
|
31 |
+
Key guidelines for managing chat history:
|
32 |
+
|
33 |
+
- The conversation should follow one of these structures:
|
34 |
+
- The first message is either a "user" message or a "system" message, followed by a "user" and then an "assistant" message.
|
35 |
+
- The last message should be either a "user" message or a "tool" message containing the result of a tool call.
|
36 |
+
- When using [tool calling](/docs/concepts/tool_calling), a "tool" message should only follow an "assistant" message that requested the tool invocation.
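As a concrete illustration of these guidelines, LangChain's `trim_messages` utility can enforce them while trimming. The sketch below uses a toy budget that counts messages (`token_counter=len`) rather than real tokens; in practice you would pass a model or tokenizer-based counter.

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, trim_messages

history = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("Hi, I'm Bob."),
    AIMessage("Hello Bob! How can I help?"),
    HumanMessage("What's my name?"),
]

trimmed = trim_messages(
    history,
    strategy="last",      # keep the most recent messages
    max_tokens=3,         # budget of 3 "tokens" (messages, given token_counter=len)
    token_counter=len,    # toy counter; use a real token counter in practice
    include_system=True,  # always keep the system message
    start_on="human",     # preserve a valid conversation structure
)
```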
|
37 |
+
|
38 |
+
:::tip
|
39 |
+
Understanding correct conversation structure is essential for being able to properly implement
|
40 |
+
[memory](https://langchain-ai.github.io/langgraph/concepts/memory/) in chat models.
|
41 |
+
:::
|
42 |
+
|
43 |
+
## Related resources
|
44 |
+
|
45 |
+
- [How to trim messages](/docs/how_to/trim_messages/)
|
46 |
+
- [Memory guide](https://langchain-ai.github.io/langgraph/concepts/memory/) for information on implementing short-term and long-term memory in chat models using [LangGraph](https://langchain-ai.github.io/langgraph/).
|
langchain_md_files/concepts/chat_models.mdx
ADDED
@@ -0,0 +1,168 @@
1 |
+
# Chat models
|
2 |
+
|
3 |
+
## Overview
|
4 |
+
|
5 |
+
Large Language Models (LLMs) are advanced machine learning models that excel in a wide range of language-related tasks such as text generation, translation, summarization, question answering, and more, without needing task-specific fine tuning for every scenario.
|
6 |
+
|
7 |
+
Modern LLMs are typically accessed through a chat model interface that takes a list of [messages](/docs/concepts/messages) as input and returns a [message](/docs/concepts/messages) as output.
|
8 |
+
|
9 |
+
The newest generation of chat models offer additional capabilities:
|
10 |
+
|
11 |
+
* [Tool calling](/docs/concepts/tool_calling): Many popular chat models offer a native [tool calling](/docs/concepts/tool_calling) API. This API allows developers to build rich applications that enable LLMs to interact with external services, APIs, and databases. Tool calling can also be used to extract structured information from unstructured data and perform various other tasks.
|
12 |
+
* [Structured output](/docs/concepts/structured_outputs): A technique to make a chat model respond in a structured format, such as JSON that matches a given schema.
|
13 |
+
* [Multimodality](/docs/concepts/multimodality): The ability to work with data other than text; for example, images, audio, and video.
|
14 |
+
|
15 |
+
## Features
|
16 |
+
|
17 |
+
LangChain provides a consistent interface for working with chat models from different providers while offering additional features for monitoring, debugging, and optimizing the performance of applications that use LLMs.
|
18 |
+
|
19 |
+
* Integrations with many chat model providers (e.g., Anthropic, OpenAI, Ollama, Microsoft Azure, Google Vertex, Amazon Bedrock, Hugging Face, Cohere, Groq). Please see [chat model integrations](/docs/integrations/chat/) for an up-to-date list of supported models.
|
20 |
+
* Use either LangChain's [messages](/docs/concepts/messages) format or OpenAI format.
|
21 |
+
* Standard [tool calling API](/docs/concepts/tool_calling): standard interface for binding tools to models, accessing tool call requests made by models, and sending tool results back to the model.
|
22 |
+
* Standard API for [structuring outputs](/docs/concepts/structured_outputs/#structured-output-method) via the `with_structured_output` method.
|
23 |
+
* Provides support for [async programming](/docs/concepts/async), [efficient batching](/docs/concepts/runnables/#optimized-parallel-execution-batch), [a rich streaming API](/docs/concepts/streaming).
|
24 |
+
* Integration with [LangSmith](https://docs.smith.langchain.com) for monitoring and debugging production-grade applications based on LLMs.
|
25 |
+
* Additional features like standardized [token usage](/docs/concepts/messages/#aimessage), [rate limiting](#rate-limiting), [caching](#caching) and more.
|
26 |
+
|
27 |
+
## Integrations
|
28 |
+
|
29 |
+
LangChain has many chat model integrations that allow you to use a wide variety of models from different providers.
|
30 |
+
|
31 |
+
These integrations are one of two types:
|
32 |
+
|
33 |
+
1. **Official models**: These are models that are officially supported by LangChain and/or the model provider. You can find these models in the `langchain-<provider>` packages.
|
34 |
+
2. **Community models**: These are models that are mostly contributed and supported by the community. You can find these models in the `langchain-community` package.
|
35 |
+
|
36 |
+
LangChain chat models are named with a convention that prefixes "Chat" to their class names (e.g., `ChatOllama`, `ChatAnthropic`, `ChatOpenAI`, etc.).
|
37 |
+
|
38 |
+
Please review the [chat model integrations](/docs/integrations/chat/) for a list of supported models.
|
39 |
+
|
40 |
+
:::note
|
41 |
+
Models that do **not** include the prefix "Chat" in their name or include "LLM" as a suffix in their name typically refer to older models that do not follow the chat model interface and instead use an interface that takes a string as input and returns a string as output.
|
42 |
+
:::
|
43 |
+
|
44 |
+
|
45 |
+
## Interface
|
46 |
+
|
47 |
+
LangChain chat models implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface. Because `BaseChatModel` also implements the [Runnable Interface](/docs/concepts/runnables), chat models support a [standard streaming interface](/docs/concepts/streaming), [async programming](/docs/concepts/async), optimized [batching](/docs/concepts/runnables/#optimized-parallel-execution-batch), and more. Please see the [Runnable Interface](/docs/concepts/runnables) for more details.
|
48 |
+
|
49 |
+
Many of the key methods of chat models operate on [messages](/docs/concepts/messages) as input and return messages as output.
|
50 |
+
|
51 |
+
Chat models offer a standard set of parameters that can be used to configure the model. These parameters are typically used to control the behavior of the model, such as the temperature of the output, the maximum number of tokens in the response, and the maximum time to wait for a response. Please see the [standard parameters](#standard-parameters) section for more details.
|
52 |
+
|
53 |
+
:::note
|
54 |
+
In documentation, we will often use the terms "LLM" and "Chat Model" interchangeably. This is because most modern LLMs are exposed to users via a chat model interface.
|
55 |
+
|
56 |
+
However, LangChain also has implementations of older LLMs that do not follow the chat model interface and instead use an interface that takes a string as input and returns a string as output. These models are typically named without the "Chat" prefix (e.g., `Ollama`, `Anthropic`, `OpenAI`, etc.).
|
57 |
+
These models implement the [BaseLLM](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.llms.BaseLLM.html#langchain_core.language_models.llms.BaseLLM) interface and may be named with the "LLM" suffix (e.g., `OllamaLLM`, `AnthropicLLM`, `OpenAILLM`, etc.). Generally, users should not use these models.
|
58 |
+
:::
|
59 |
+
|
60 |
+
### Key methods
|
61 |
+
|
62 |
+
The key methods of a chat model are:
|
63 |
+
|
64 |
+
1. **invoke**: The primary method for interacting with a chat model. It takes a list of [messages](/docs/concepts/messages) as input and returns a list of messages as output.
|
65 |
+
2. **stream**: A method that allows you to stream the output of a chat model as it is generated.
|
66 |
+
3. **batch**: A method that allows you to batch multiple requests to a chat model together for more efficient processing.
|
67 |
+
4. **bind_tools**: A method that allows you to bind a tool to a chat model for use in the model's execution context.
|
68 |
+
5. **with_structured_output**: A wrapper around the `invoke` method for models that natively support [structured output](/docs/concepts/structured_outputs).
|
69 |
+
|
70 |
+
Other important methods can be found in the [BaseChatModel API Reference](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html).
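To make this concrete, here is a short sketch using the OpenAI integration (assumptions: `langchain-openai` is installed, `OPENAI_API_KEY` is set, and the model name is illustrative). Any chat model integration exposes the same methods.

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

messages = [
    ("system", "You are a terse assistant."),
    ("human", "What is LangChain?"),
]

response = model.invoke(messages)        # returns an AIMessage
print(response.content)

for chunk in model.stream(messages):     # yields message chunks as they arrive
    print(chunk.content, end="", flush=True)
```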
|
71 |
+
|
72 |
+
### Inputs and outputs
|
73 |
+
|
74 |
+
Modern LLMs are typically accessed through a chat model interface that takes [messages](/docs/concepts/messages) as input and returns [messages](/docs/concepts/messages) as output. Messages are typically associated with a role (e.g., "system", "human", "assistant") and one or more content blocks that contain text or potentially multimodal data (e.g., images, audio, video).
|
75 |
+
|
76 |
+
LangChain supports two message formats to interact with chat models:
|
77 |
+
|
78 |
+
1. **LangChain Message Format**: LangChain's own message format, which is used by default and is used internally by LangChain.
|
79 |
+
2. **OpenAI's Message Format**: OpenAI's message format.
|
80 |
+
|
81 |
+
### Standard parameters
|
82 |
+
|
83 |
+
Many chat models have standardized parameters that can be used to configure the model:
|
84 |
+
|
85 |
+
| Parameter | Description |
|
86 |
+
|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
87 |
+
| `model` | The name or identifier of the specific AI model you want to use (e.g., `"gpt-3.5-turbo"` or `"gpt-4"`). |
|
88 |
+
| `temperature` | Controls the randomness of the model's output. A higher value (e.g., 1.0) makes responses more creative, while a lower value (e.g., 0.0) makes them more deterministic and focused. |
|
89 |
+
| `timeout` | The maximum time (in seconds) to wait for a response from the model before canceling the request. Ensures the request doesn’t hang indefinitely. |
|
90 |
+
| `max_tokens` | Limits the total number of tokens (words and punctuation) in the response. This controls how long the output can be. |
|
91 |
+
| `stop` | Specifies stop sequences that indicate when the model should stop generating tokens. For example, you might use specific strings to signal the end of a response. |
|
92 |
+
| `max_retries` | The maximum number of attempts the system will make to resend a request if it fails due to issues like network timeouts or rate limits. |
|
93 |
+
| `api_key` | The API key required for authenticating with the model provider. This is usually issued when you sign up for access to the model. |
|
94 |
+
| `base_url` | The URL of the API endpoint where requests are sent. This is typically provided by the model's provider and is necessary for directing your requests. |
|
95 |
+
| `rate_limiter` | An optional [BaseRateLimiter](https://python.langchain.com/api_reference/core/rate_limiters/langchain_core.rate_limiters.BaseRateLimiter.html#langchain_core.rate_limiters.BaseRateLimiter) to space out requests to avoid exceeding rate limits. See [rate-limiting](#rate-limiting) below for more details. |
|
96 |
+
|
97 |
+
Some important things to note:
|
98 |
+
|
99 |
+
- Standard parameters only apply to model providers that expose parameters with the intended functionality. For example, some providers do not expose a configuration for maximum output tokens, so `max_tokens` can't be supported on these.
|
100 |
+
- Standard parameters are currently only enforced on integrations that have their own integration packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.), they're not enforced on models in `langchain-community`.
|
101 |
+
|
102 |
+
Chat models also accept other parameters that are specific to that integration. To find all the parameters supported by a chat model, head to its respective [API reference](https://python.langchain.com/api_reference/) for that model.
|
103 |
+
|
104 |
+
## Tool calling
|
105 |
+
|
106 |
+
Chat models can call [tools](/docs/concepts/tools) to perform tasks such as fetching data from a database, making API requests, or running custom code. Please
|
107 |
+
see the [tool calling](/docs/concepts/tool_calling) guide for more information.
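A brief sketch of the idea, assuming a tool-calling model such as the `ChatOpenAI` instance from the earlier sketch:

```python
from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

model_with_tools = model.bind_tools([multiply])
ai_msg = model_with_tools.invoke("What is 6 times 7?")
# ai_msg.tool_calls -> e.g. [{"name": "multiply", "args": {"a": 6, "b": 7}, ...}]
```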
|
108 |
+
|
109 |
+
## Structured outputs
|
110 |
+
|
111 |
+
Chat models can be requested to respond in a particular format (e.g., JSON or matching a particular schema). This feature is extremely
|
112 |
+
useful for information extraction tasks. Please read more about
|
113 |
+
the technique in the [structured outputs](/docs/concepts/structured_outputs) guide.
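For example, a minimal sketch using a Pydantic schema (again assuming a model that supports structured output, such as the `ChatOpenAI` instance above):

```python
from pydantic import BaseModel, Field

class Joke(BaseModel):
    """A joke with a setup and a punchline."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")

structured_model = model.with_structured_output(Joke)
joke = structured_model.invoke("Tell me a joke about cats")  # -> Joke instance
```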
|
114 |
+
|
115 |
+
## Multimodality
|
116 |
+
|
117 |
+
Large Language Models (LLMs) are not limited to processing text. They can also be used to process other types of data, such as images, audio, and video. This is known as [multimodality](/docs/concepts/multimodality).
|
118 |
+
|
119 |
+
Currently, only some LLMs support multimodal inputs, and almost none support multimodal outputs. Please consult the specific model documentation for details.
|
120 |
+
|
121 |
+
## Context window
|
122 |
+
|
123 |
+
A chat model's context window refers to the maximum size of the input sequence the model can process at one time. While the context windows of modern LLMs are quite large, they still present a limitation that developers must keep in mind when working with chat models.
|
124 |
+
|
125 |
+
If the input exceeds the context window, the model may not be able to process the entire input and could raise an error. In conversational applications, this is especially important because the context window determines how much information the model can "remember" throughout a conversation. Developers often need to manage the input within the context window to maintain a coherent dialogue without exceeding the limit. For more details on handling memory in conversations, refer to the [memory guide](https://langchain-ai.github.io/langgraph/concepts/memory/).
|
126 |
+
|
127 |
+
The size of the input is measured in [tokens](/docs/concepts/tokens) which are the unit of processing that the model uses.
|
128 |
+
|
129 |
+
## Advanced topics
|
130 |
+
|
131 |
+
### Rate-limiting
|
132 |
+
|
133 |
+
Many chat model providers impose a limit on the number of requests that can be made in a given time period.
|
134 |
+
|
135 |
+
If you hit a rate limit, you will typically receive a rate limit error response from the provider, and will need to wait before making more requests.
|
136 |
+
|
137 |
+
You have a few options to deal with rate limits:
|
138 |
+
|
139 |
+
1. Try to avoid hitting rate limits by spacing out requests: Chat models accept a `rate_limiter` parameter that can be provided during initialization. This parameter is used to control the rate at which requests are made to the model provider. Spacing out the requests to a given model is a particularly useful strategy when benchmarking models to evaluate their performance. Please see the [how to handle rate limits](/docs/how_to/chat_model_rate_limiting/) for more information on how to use this feature.
|
140 |
+
2. Try to recover from rate limit errors: If you receive a rate limit error, you can wait a certain amount of time before retrying the request. The amount of time to wait can be increased with each subsequent rate limit error. Chat models have a `max_retries` parameter that can be used to control the number of retries. See the [standard parameters](#standard-parameters) section for more information.
|
141 |
+
3. Fallback to another chat model: If you hit a rate limit with one chat model, you can switch to another chat model that is not rate-limited.
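As a sketch of option 1 above, LangChain ships an `InMemoryRateLimiter` that can be passed to a chat model via the `rate_limiter` parameter (the numbers below are illustrative):

```python
from langchain_core.rate_limiters import InMemoryRateLimiter

rate_limiter = InMemoryRateLimiter(
    requests_per_second=0.1,    # roughly one request every 10 seconds
    check_every_n_seconds=0.1,  # how often to check whether a request may proceed
    max_bucket_size=10,         # maximum burst size
)

# model = ChatOpenAI(model="gpt-4o-mini", rate_limiter=rate_limiter)
```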
|
142 |
+
|
143 |
+
### Caching
|
144 |
+
|
145 |
+
Chat model APIs can be slow, so a natural question is whether to cache the results of previous conversations. Theoretically, caching can help improve performance by reducing the number of requests made to the model provider. In practice, caching chat model responses is a complex problem and should be approached with caution.
|
146 |
+
|
147 |
+
The reason is that getting a cache hit is unlikely after the first or second interaction in a conversation if relying on caching the **exact** inputs into the model. For example, how likely do you think that multiple conversations start with the exact same message? What about the exact same three messages?
|
148 |
+
|
149 |
+
An alternative approach is to use semantic caching, where you cache responses based on the meaning of the input rather than the exact input itself. This can be effective in some situations, but not in others.
|
150 |
+
|
151 |
+
A semantic cache introduces a dependency on another model on the critical path of your application (e.g., the semantic cache may rely on an [embedding model](/docs/concepts/embedding_models) to convert text to a vector representation), and it's not guaranteed to capture the meaning of the input accurately.
|
152 |
+
|
153 |
+
However, there might be situations where caching chat model responses is beneficial. For example, if you have a chat model that is used to answer frequently asked questions, caching responses can help reduce the load on the model provider, costs, and improve response times.
|
154 |
+
|
155 |
+
Please see the [how to cache chat model responses](/docs/how_to/chat_model_caching/) guide for more details.
|
156 |
+
|
157 |
+
## Related resources
|
158 |
+
|
159 |
+
* How-to guides on using chat models: [how-to guides](/docs/how_to/#chat-models).
|
160 |
+
* List of supported chat models: [chat model integrations](/docs/integrations/chat/).
|
161 |
+
|
162 |
+
### Conceptual guides
|
163 |
+
|
164 |
+
* [Messages](/docs/concepts/messages)
|
165 |
+
* [Tool calling](/docs/concepts/tool_calling)
|
166 |
+
* [Multimodality](/docs/concepts/multimodality)
|
167 |
+
* [Structured outputs](/docs/concepts/structured_outputs)
|
168 |
+
* [Tokens](/docs/concepts/tokens)
|
langchain_md_files/concepts/document_loaders.mdx
ADDED
@@ -0,0 +1,45 @@
1 |
+
# Document loaders
|
2 |
+
<span data-heading-keywords="document loader,document loaders"></span>
|
3 |
+
|
4 |
+
:::info[Prerequisites]
|
5 |
+
|
6 |
+
* [Document loaders API reference](/docs/how_to/#document-loaders)
|
7 |
+
:::
|
8 |
+
|
9 |
+
Document loaders are designed to load document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc.
|
10 |
+
|
11 |
+
## Integrations
|
12 |
+
|
13 |
+
You can find available integrations on the [Document loaders integrations page](/docs/integrations/document_loaders/).
|
14 |
+
|
15 |
+
## Interface
|
16 |
+
|
17 |
+
Documents loaders implement the [BaseLoader interface](https://python.langchain.com/api_reference/core/document_loaders/langchain_core.document_loaders.base.BaseLoader.html).
|
18 |
+
|
19 |
+
Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method or `.lazy_load`.
|
20 |
+
|
21 |
+
Here's a simple example:
|
22 |
+
|
23 |
+
```python
|
24 |
+
from langchain_community.document_loaders.csv_loader import CSVLoader
|
25 |
+
|
26 |
+
loader = CSVLoader(
|
27 |
+
... # <-- Integration specific parameters here
|
28 |
+
)
|
29 |
+
data = loader.load()
|
30 |
+
```
|
31 |
+
|
32 |
+
When working with large datasets, you can use the `.lazy_load` method:
|
33 |
+
|
34 |
+
```python
|
35 |
+
for document in loader.lazy_load():
|
36 |
+
print(document)
|
37 |
+
```
|
38 |
+
|
39 |
+
## Related resources
|
40 |
+
|
41 |
+
Please see the following resources for more information:
|
42 |
+
|
43 |
+
* [How-to guides for document loaders](/docs/how_to/#document-loaders)
|
44 |
+
* [Document API reference](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html)
|
45 |
+
* [Document loaders integrations](/docs/integrations/document_loaders/)
|
langchain_md_files/concepts/embedding_models.mdx
ADDED
@@ -0,0 +1,130 @@
1 |
+
# Embedding models
|
2 |
+
<span data-heading-keywords="embedding,embeddings"></span>
|
3 |
+
|
4 |
+
:::info[Prerequisites]
|
5 |
+
|
6 |
+
* [Documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html)
|
7 |
+
|
8 |
+
:::
|
9 |
+
|
10 |
+
:::info[Note]
|
11 |
+
This conceptual overview focuses on text-based embedding models.
|
12 |
+
|
13 |
+
Embedding models can also be [multimodal](/docs/concepts/multimodality) though such models are not currently supported by LangChain.
|
14 |
+
:::
|
15 |
+
|
16 |
+
Imagine being able to capture the essence of any text - a tweet, document, or book - in a single, compact representation.
|
17 |
+
This is the power of embedding models, which lie at the heart of many retrieval systems.
|
18 |
+
Embedding models transform human language into a format that machines can understand and compare with speed and accuracy.
|
19 |
+
These models take text as input and produce a fixed-length array of numbers, a numerical fingerprint of the text's semantic meaning.
|
20 |
+
Embeddings allow search systems to find relevant documents not just based on keyword matches, but on semantic understanding.
|
21 |
+
|
22 |
+
## Key concepts
|
23 |
+
|
24 |
+

|
25 |
+
|
26 |
+
(1) **Embed text as a vector**: Embeddings transform text into a numerical vector representation.
|
27 |
+
|
28 |
+
(2) **Measure similarity**: Embedding vectors can be compared using simple mathematical operations.
|
29 |
+
|
30 |
+
## Embedding
|
31 |
+
|
32 |
+
### Historical context
|
33 |
+
|
34 |
+
The landscape of embedding models has evolved significantly over the years.
|
35 |
+
A pivotal moment came in 2018 when Google introduced [BERT (Bidirectional Encoder Representations from Transformers)](https://www.nvidia.com/en-us/glossary/bert/).
|
36 |
+
BERT applied transformer models to embed text as a simple vector representation, which led to unprecedented performance across various NLP tasks.
|
37 |
+
However, BERT wasn't optimized for generating sentence embeddings efficiently.
|
38 |
+
This limitation spurred the creation of [SBERT (Sentence-BERT)](https://www.sbert.net/examples/training/sts/README.html), which adapted the BERT architecture to generate semantically rich sentence embeddings, easily comparable via similarity metrics like cosine similarity, dramatically reducing the computational overhead for tasks like finding similar sentences.
|
39 |
+
Today, the embedding model ecosystem is diverse, with numerous providers offering their own implementations.
|
40 |
+
To navigate this variety, researchers and practitioners often turn to benchmarks like the [Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/blog/mteb) for objective comparisons.
|
41 |
+
|
42 |
+
:::info[Further reading]
|
43 |
+
|
44 |
+
* See the [seminal BERT paper](https://arxiv.org/abs/1810.04805).
|
45 |
+
* See Cameron Wolfe's [excellent review](https://cameronrwolfe.substack.com/p/the-basics-of-ai-powered-vector-search?utm_source=profile&utm_medium=reader2) of embedding models.
|
46 |
+
* See the [Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/blog/mteb) leaderboard for a comprehensive overview of embedding models.
|
47 |
+
|
48 |
+
:::
|
49 |
+
|
50 |
+
### Interface
|
51 |
+
|
52 |
+
LangChain provides a universal interface for working with embedding models, providing standard methods for common operations.
|
53 |
+
This common interface simplifies interaction with various embedding providers through two central methods:
|
54 |
+
|
55 |
+
- `embed_documents`: For embedding multiple texts (documents)
|
56 |
+
- `embed_query`: For embedding a single text (query)
|
57 |
+
|
58 |
+
This distinction is important, as some providers employ different embedding strategies for documents (which are to be searched) versus queries (the search input itself).
|
59 |
+
To illustrate, here's a practical example using LangChain's `.embed_documents` method to embed a list of strings:
|
60 |
+
|
61 |
+
```python
|
62 |
+
from langchain_openai import OpenAIEmbeddings
|
63 |
+
embeddings_model = OpenAIEmbeddings()
|
64 |
+
embeddings = embeddings_model.embed_documents(
|
65 |
+
[
|
66 |
+
"Hi there!",
|
67 |
+
"Oh, hello!",
|
68 |
+
"What's your name?",
|
69 |
+
"My friends call me World",
|
70 |
+
"Hello World!"
|
71 |
+
]
|
72 |
+
)
|
73 |
+
len(embeddings), len(embeddings[0])
|
74 |
+
# (5, 1536)
|
75 |
+
```
|
76 |
+
|
77 |
+
For convenience, you can also use the `embed_query` method to embed a single text:
|
78 |
+
|
79 |
+
```python
|
80 |
+
query_embedding = embeddings_model.embed_query("What is the meaning of life?")
|
81 |
+
```
|
82 |
+
|
83 |
+
:::info[Further reading]
|
84 |
+
|
85 |
+
* See the full list of [LangChain embedding model integrations](/docs/integrations/text_embedding/).
|
86 |
+
* See these [how-to guides](/docs/how_to/embed_text) for working with embedding models.
|
87 |
+
|
88 |
+
:::
|
89 |
+
|
90 |
+
### Integrations
|
91 |
+
|
92 |
+
LangChain offers many embedding model integrations which you can find [on the embedding models](/docs/integrations/text_embedding/) integrations page.
|
93 |
+
|
94 |
+
## Measure similarity
|
95 |
+
|
96 |
+
Each embedding is essentially a set of coordinates, often in a high-dimensional space.
|
97 |
+
In this space, the position of each point (embedding) reflects the meaning of its corresponding text.
|
98 |
+
Just as similar words might be close to each other in a thesaurus, similar concepts end up close to each other in this embedding space.
|
99 |
+
This allows for intuitive comparisons between different pieces of text.
|
100 |
+
By reducing text to these numerical representations, we can use simple mathematical operations to quickly measure how alike two pieces of text are, regardless of their original length or structure.
|
101 |
+
Some common similarity metrics include:
|
102 |
+
|
103 |
+
- **Cosine Similarity**: Measures the cosine of the angle between two vectors.
|
104 |
+
- **Euclidean Distance**: Measures the straight-line distance between two points.
|
105 |
+
- **Dot Product**: Measures the projection of one vector onto another.
|
106 |
+
|
107 |
+
The similarity metric should be chosen based on the model being used.
|
108 |
+
As an example, [OpenAI suggests cosine similarity for their embeddings](https://platform.openai.com/docs/guides/embeddings/which-distance-function-should-i-use), which can be easily implemented:
|
109 |
+
|
110 |
+
```python
|
111 |
+
import numpy as np
|
112 |
+
|
113 |
+
def cosine_similarity(vec1, vec2):
|
114 |
+
dot_product = np.dot(vec1, vec2)
|
115 |
+
norm_vec1 = np.linalg.norm(vec1)
|
116 |
+
norm_vec2 = np.linalg.norm(vec2)
|
117 |
+
return dot_product / (norm_vec1 * norm_vec2)
|
118 |
+
|
119 |
+
# Compare the query embedding with one of the document embeddings from the earlier examples
similarity = cosine_similarity(query_embedding, embeddings[0])
|
120 |
+
print("Cosine Similarity:", similarity)
|
121 |
+
```
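For comparison, Euclidean distance and dot product can be computed just as easily with NumPy. This is a minimal sketch that reuses `query_embedding` and `embeddings` from the examples above:

```python
import numpy as np

def euclidean_distance(vec1, vec2):
    # Straight-line distance between the two embedding vectors
    return np.linalg.norm(np.array(vec1) - np.array(vec2))

def dot_product(vec1, vec2):
    # Projection of one embedding onto the other (unnormalized similarity)
    return np.dot(vec1, vec2)

print("Euclidean Distance:", euclidean_distance(query_embedding, embeddings[0]))
print("Dot Product:", dot_product(query_embedding, embeddings[0]))
```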
|
122 |
+
|
123 |
+
:::info[Further reading]
|
124 |
+
|
125 |
+
* See Simon Willison’s [nice blog post and video](https://simonwillison.net/2023/Oct/23/embeddings/) on embeddings and similarity metrics.
|
126 |
+
* See [this documentation](https://developers.google.com/machine-learning/clustering/dnn-clustering/supervised-similarity) from Google on similarity metrics to consider with embeddings.
|
127 |
+
* See Pinecone's [blog post](https://www.pinecone.io/learn/vector-similarity/) on similarity metrics.
|
128 |
+
* See OpenAI's [FAQ](https://platform.openai.com/docs/guides/embeddings/faq) on what similarity metric to use with OpenAI embeddings.
|
129 |
+
|
130 |
+
:::
|
langchain_md_files/concepts/evaluation.mdx
ADDED
@@ -0,0 +1,17 @@
1 |
+
# Evaluation
|
2 |
+
<span data-heading-keywords="evaluation,evaluate"></span>
|
3 |
+
|
4 |
+
Evaluation is the process of assessing the performance and effectiveness of your LLM-powered applications.
|
5 |
+
It involves testing the model's responses against a set of predefined criteria or benchmarks to ensure it meets the desired quality standards and fulfills the intended purpose.
|
6 |
+
This process is vital for building reliable applications.
|
7 |
+
|
8 |
+

|
9 |
+
|
10 |
+
[LangSmith](https://docs.smith.langchain.com/) helps with this process in a few ways:
|
11 |
+
|
12 |
+
- It makes it easier to create and curate datasets via its tracing and annotation features
|
13 |
+
- It provides an evaluation framework that helps you define metrics and run your app against your dataset
|
14 |
+
- It allows you to track results over time and automatically run your evaluators on a schedule or as part of CI/CD
|
15 |
+
|
16 |
+
To learn more, check out [this LangSmith guide](https://docs.smith.langchain.com/concepts/evaluation).
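As a rough illustration only, an offline evaluation with the LangSmith SDK might look like the sketch below; the dataset name, target function, and correctness check are all hypothetical:

```python
from langsmith.evaluation import evaluate

def my_app(inputs: dict) -> dict:
    # Hypothetical target: call your chain or agent here
    return {"answer": "LangSmith helps you trace and evaluate LLM applications."}

def correctness(run, example) -> dict:
    # Hypothetical evaluator: check whether the reference answer appears in the app's output
    score = example.outputs["answer"].lower() in run.outputs["answer"].lower()
    return {"key": "correctness", "score": int(score)}

evaluate(
    my_app,                     # the application being evaluated
    data="my-example-dataset",  # hypothetical dataset previously created in LangSmith
    evaluators=[correctness],
)
```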
|
17 |
+
|
langchain_md_files/concepts/example_selectors.mdx
ADDED
@@ -0,0 +1,20 @@
1 |
+
# Example selectors
|
2 |
+
|
3 |
+
:::note Prerequisites
|
4 |
+
|
5 |
+
- [Chat models](/docs/concepts/chat_models/)
|
6 |
+
- [Few-shot prompting](/docs/concepts/few_shot_prompting/)
|
7 |
+
:::
|
8 |
+
|
9 |
+
## Overview
|
10 |
+
|
11 |
+
One common prompting technique for achieving better performance is to include examples as part of the prompt. This is known as [few-shot prompting](/docs/concepts/few_shot_prompting).
|
12 |
+
|
13 |
+
This gives the [language model](/docs/concepts/chat_models/) concrete examples of how it should behave.
|
14 |
+
Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them.
|
15 |
+
|
16 |
+
**Example Selectors** are classes responsible for selecting and then formatting examples into prompts.
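For instance, a semantic-similarity selector picks the examples closest in meaning to the current input. This is a minimal sketch; the example data is made up, and it assumes `langchain-openai` is installed with an API key configured:

```python
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "sunny", "output": "gloomy"},
]

# Index the examples by embedding their text, then select the k most similar ones
selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    OpenAIEmbeddings(),
    InMemoryVectorStore,
    k=1,
)

selector.select_examples({"input": "joyful"})  # -> [{"input": "happy", "output": "sad"}]
```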
|
17 |
+
|
18 |
+
## Related resources
|
19 |
+
|
20 |
+
* [Example selector how-to guides](/docs/how_to/#example-selectors)
|
langchain_md_files/concepts/few_shot_prompting.mdx
ADDED
@@ -0,0 +1,85 @@
1 |
+
# Few-shot prompting
|
2 |
+
|
3 |
+
:::note Prerequisites
|
4 |
+
|
5 |
+
- [Chat models](/docs/concepts/chat_models/)
|
6 |
+
:::
|
7 |
+
|
8 |
+
## Overview
|
9 |
+
|
10 |
+
One of the most effective ways to improve model performance is to give a model examples of
|
11 |
+
what you want it to do. The technique of adding example inputs and expected outputs
|
12 |
+
to a model prompt is known as "few-shot prompting". The technique is based on the
|
13 |
+
[Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165) paper.
|
14 |
+
There are a few things to think about when doing few-shot prompting:
|
15 |
+
|
16 |
+
1. How are examples generated?
|
17 |
+
2. How many examples are in each prompt?
|
18 |
+
3. How are examples selected at runtime?
|
19 |
+
4. How are examples formatted in the prompt?
|
20 |
+
|
21 |
+
Here are the considerations for each.
|
22 |
+
|
23 |
+
## 1. Generating examples
|
24 |
+
|
25 |
+
The first and most important step of few-shot prompting is coming up with a good dataset of examples. Good examples should be relevant at runtime, clear, informative, and provide information that was not already known to the model.
|
26 |
+
|
27 |
+
At a high-level, the basic ways to generate examples are:
|
28 |
+
- Manual: a person/people generates examples they think are useful.
|
29 |
+
- Better model: a better (presumably more expensive/slower) model's responses are used as examples for a worse (presumably cheaper/faster) model.
|
30 |
+
- User feedback: users (or labelers) leave feedback on interactions with the application and examples are generated based on that feedback (for example, all interactions with positive feedback could be turned into examples).
|
31 |
+
- LLM feedback: same as user feedback but the process is automated by having models evaluate themselves.
|
32 |
+
|
33 |
+
Which approach is best depends on your task. For tasks where a small number of core principles need to be understood really well, it can be valuable to hand-craft a few really good examples.
|
34 |
+
For tasks where the space of correct behaviors is broader and more nuanced, it can be useful to generate many examples in a more automated fashion so that there's a higher likelihood of there being some highly relevant examples for any runtime input.
|
35 |
+
|
36 |
+
**Single-turn vs. multi-turn examples**
|
37 |
+
|
38 |
+
Another dimension to think about when generating examples is what the example is actually showing.
|
39 |
+
|
40 |
+
The simplest types of examples just have a user input and an expected model output. These are single-turn examples.
|
41 |
+
|
42 |
+
A more complex type of example is an entire conversation, usually one in which a model initially responds incorrectly and a user then tells the model how to correct its answer.
|
43 |
+
This is called a multi-turn example. Multi-turn examples can be useful for more nuanced tasks where it's useful to show common errors and spell out exactly why they're wrong and what should be done instead.
|
44 |
+
|
45 |
+
## 2. Number of examples
|
46 |
+
|
47 |
+
Once we have a dataset of examples, we need to think about how many examples should be in each prompt.
|
48 |
+
The key tradeoff is that more examples generally improve performance, but larger prompts increase costs and latency.
|
49 |
+
And beyond some threshold having too many examples can start to confuse the model.
|
50 |
+
Finding the right number of examples is highly dependent on the model, the task, the quality of the examples, and your cost and latency constraints.
|
51 |
+
Anecdotally, the better the model is the fewer examples it needs to perform well and the more quickly you hit steeply diminishing returns on adding more examples.
|
52 |
+
But, the best/only way to reliably answer this question is to run some experiments with different numbers of examples.
|
53 |
+
|
54 |
+
## 3. Selecting examples
|
55 |
+
|
56 |
+
Assuming we are not adding our entire example dataset into each prompt, we need to have a way of selecting examples from our dataset based on a given input. We can do this:
|
57 |
+
- Randomly
|
58 |
+
- By (semantic or keyword-based) similarity of the inputs
|
59 |
+
- Based on some other constraints, like token size
|
60 |
+
|
61 |
+
LangChain has a number of [`ExampleSelectors`](/docs/concepts/example_selectors) which make it easy to use any of these techniques.
|
62 |
+
|
63 |
+
Generally, selecting by semantic similarity leads to the best model performance. But how important this is depends on the model and task, and is something worth experimenting with.
|
64 |
+
|
65 |
+
## 4. Formatting examples
|
66 |
+
|
67 |
+
Most state-of-the-art models these days are chat models, so we'll focus on formatting examples for those. Our basic options are to insert the examples:
|
68 |
+
- In the system prompt as a string
|
69 |
+
- As their own messages
|
70 |
+
|
71 |
+
If we insert our examples into the system prompt as a string, we'll need to make sure it's clear to the model where each example begins and which parts are the input versus output. Different models respond better to different syntaxes, like [ChatML](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chat-markup-language), XML, TypeScript, etc.
|
72 |
+
|
73 |
+
If we insert our examples as messages, where each example is represented as a sequence of Human, AI messages, we might want to also assign [names](/docs/concepts/messages) to our messages like `"example_user"` and `"example_assistant"` to make it clear that these messages correspond to different actors than the latest input message.
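For example, a single hand-written example could be spliced into the prompt as a pair of named messages (a minimal sketch; the example content is made up, and `model` is any chat model instance):

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

messages = [
    SystemMessage("Answer riddles concisely."),
    # Few-shot example, named so the model can tell it apart from the real user
    HumanMessage("What has keys but can't open locks?", name="example_user"),
    AIMessage("A piano.", name="example_assistant"),
    # The actual input
    HumanMessage("What gets wetter the more it dries?"),
]

model.invoke(messages)
```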
|
74 |
+
|
75 |
+
**Formatting tool call examples**
|
76 |
+
|
77 |
+
One area where formatting examples as messages can be tricky is when our example outputs have tool calls. This is because different models have different constraints on what types of message sequences are allowed when any tool calls are generated.
|
78 |
+
- Some models require that any AIMessage with tool calls be immediately followed by ToolMessages for every tool call,
|
79 |
+
- Some models additionally require that any ToolMessages be immediately followed by an AIMessage before the next HumanMessage,
|
80 |
+
- Some models require that tools are passed into the model if there are any tool calls / ToolMessages in the chat history.
|
81 |
+
|
82 |
+
These requirements are model-specific and should be checked for the model you are using. If your model requires ToolMessages after tool calls and/or AIMessages after ToolMessages and your examples only include expected tool calls and not the actual tool outputs, you can try adding dummy ToolMessages / AIMessages to the end of each example with generic contents to satisfy the API constraints.
|
83 |
+
In these cases it's especially worth experimenting with inserting your examples as strings versus messages, as having dummy messages can adversely affect certain models.
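As a rough sketch of what such a padded example might look like (the tool name, arguments, and contents are all made up):

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

example = [
    HumanMessage("What is 317 * 12?", name="example_user"),
    # The behavior we want to demonstrate: calling a (hypothetical) multiply tool
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[
            {"name": "multiply", "args": {"a": 317, "b": 12}, "id": "1", "type": "tool_call"}
        ],
    ),
    # Dummy tool result added only to satisfy the provider's message-ordering constraints
    ToolMessage("3804", tool_call_id="1"),
    AIMessage("317 * 12 = 3804", name="example_assistant"),
]
```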
|
84 |
+
|
85 |
+
You can see a case study of how Anthropic and OpenAI respond to different few-shot prompting techniques on two different tool calling benchmarks [here](https://blog.langchain.dev/few-shot-prompting-to-improve-tool-calling-performance/).
|
langchain_md_files/concepts/index.mdx
ADDED
@@ -0,0 +1,95 @@
1 |
+
# Conceptual guide
|
2 |
+
|
3 |
+
This guide provides explanations of the key concepts behind the LangChain framework and AI applications more broadly.
|
4 |
+
|
5 |
+
We recommend that you go through at least one of the [Tutorials](/docs/tutorials) before diving into the conceptual guide. This will provide practical context that will make it easier to understand the concepts discussed here.
|
6 |
+
|
7 |
+
The conceptual guide does not cover step-by-step instructions or specific implementation examples — those are found in the [How-to guides](/docs/how_to/) and [Tutorials](/docs/tutorials). For detailed reference material, please see the [API reference](https://python.langchain.com/api_reference/).
|
8 |
+
|
9 |
+
## High level
|
10 |
+
|
11 |
+
- **[Why LangChain?](/docs/concepts/why_langchain)**: Overview of the value that LangChain provides.
|
12 |
+
- **[Architecture](/docs/concepts/architecture)**: How packages are organized in the LangChain ecosystem.
|
13 |
+
|
14 |
+
## Concepts
|
15 |
+
|
16 |
+
- **[Chat models](/docs/concepts/chat_models)**: LLMs exposed via a chat API that process sequences of messages as input and output a message.
|
17 |
+
- **[Messages](/docs/concepts/messages)**: The unit of communication in chat models, used to represent model input and output.
|
18 |
+
- **[Chat history](/docs/concepts/chat_history)**: A conversation represented as a sequence of messages, alternating between user messages and model responses.
|
19 |
+
- **[Tools](/docs/concepts/tools)**: A function with an associated schema defining the function's name, description, and the arguments it accepts.
|
20 |
+
- **[Tool calling](/docs/concepts/tool_calling)**: A type of chat model API that accepts tool schemas, along with messages, as input and returns invocations of those tools as part of the output message.
|
21 |
+
- **[Structured output](/docs/concepts/structured_outputs)**: A technique to make a chat model respond in a structured format, such as JSON that matches a given schema.
|
22 |
+
- **[Memory](https://langchain-ai.github.io/langgraph/concepts/memory/)**: Information about a conversation that is persisted so that it can be used in future conversations.
|
23 |
+
- **[Multimodality](/docs/concepts/multimodality)**: The ability to work with data that comes in different forms, such as text, audio, images, and video.
|
24 |
+
- **[Runnable interface](/docs/concepts/runnables)**: The base abstraction that many LangChain components and the LangChain Expression Language are built on.
|
25 |
+
- **[Streaming](/docs/concepts/streaming)**: LangChain streaming APIs for surfacing results as they are generated.
|
26 |
+
- **[LangChain Expression Language (LCEL)](/docs/concepts/lcel)**: A syntax for orchestrating LangChain components. Most useful for simpler applications.
|
27 |
+
- **[Document loaders](/docs/concepts/document_loaders)**: Load a source as a list of documents.
|
28 |
+
- **[Retrieval](/docs/concepts/retrieval)**: Information retrieval systems can retrieve structured or unstructured data from a datasource in response to a query.
|
29 |
+
- **[Text splitters](/docs/concepts/text_splitters)**: Split long text into smaller chunks that can be individually indexed to enable granular retrieval.
|
30 |
+
- **[Embedding models](/docs/concepts/embedding_models)**: Models that represent data such as text or images in a vector space.
|
31 |
+
- **[Vector stores](/docs/concepts/vectorstores)**: Storage of and efficient search over vectors and associated metadata.
|
32 |
+
- **[Retriever](/docs/concepts/retrievers)**: A component that returns relevant documents from a knowledge base in response to a query.
|
33 |
+
- **[Retrieval Augmented Generation (RAG)](/docs/concepts/rag)**: A technique that enhances language models by combining them with external knowledge bases.
|
34 |
+
- **[Agents](/docs/concepts/agents)**: Use a [language model](/docs/concepts/chat_models) to choose a sequence of actions to take. Agents can interact with external resources via [tool](/docs/concepts/tools).
|
35 |
+
- **[Prompt templates](/docs/concepts/prompt_templates)**: Component for factoring out the static parts of a model "prompt" (usually a sequence of messages). Useful for serializing, versioning, and reusing these static parts.
|
36 |
+
- **[Output parsers](/docs/concepts/output_parsers)**: Responsible for taking the output of a model and transforming it into a more suitable format for downstream tasks. Output parsers were primarily useful prior to the general availability of [tool calling](/docs/concepts/tool_calling) and [structured outputs](/docs/concepts/structured_outputs).
|
37 |
+
- **[Few-shot prompting](/docs/concepts/few_shot_prompting)**: A technique for improving model performance by providing a few examples of the task to perform in the prompt.
|
38 |
+
- **[Example selectors](/docs/concepts/example_selectors)**: Used to select the most relevant examples from a dataset based on a given input. Example selectors are used in few-shot prompting to select examples for a prompt.
|
39 |
+
- **[Async programming](/docs/concepts/async)**: The basics that one should know to use LangChain in an asynchronous context.
|
40 |
+
- **[Callbacks](/docs/concepts/callbacks)**: Callbacks enable the execution of custom auxiliary code in built-in components. Callbacks are used to stream outputs from LLMs in LangChain, trace the intermediate steps of an application, and more.
|
41 |
+
- **[Tracing](/docs/concepts/tracing)**: The process of recording the steps that an application takes to go from input to output. Tracing is essential for debugging and diagnosing issues in complex applications.
|
42 |
+
- **[Evaluation](/docs/concepts/evaluation)**: The process of assessing the performance and effectiveness of AI applications. This involves testing the model's responses against a set of predefined criteria or benchmarks to ensure it meets the desired quality standards and fulfills the intended purpose. This process is vital for building reliable applications.
|
43 |
+
- **[Testing](/docs/concepts/testing)**: The process of verifying that a component of an integration or application works as expected. Testing is essential for ensuring that the application behaves correctly and that changes to the codebase do not introduce new bugs.
|
44 |
+
|
45 |
+
## Glossary
|
46 |
+
|
47 |
+
- **[AIMessageChunk](/docs/concepts/messages#aimessagechunk)**: A partial response from an AI message. Used when streaming responses from a chat model.
|
48 |
+
- **[AIMessage](/docs/concepts/messages#aimessage)**: Represents a complete response from an AI model.
|
49 |
+
- **[astream_events](/docs/concepts/chat_models#key-methods)**: Stream granular information from [LCEL](/docs/concepts/lcel) chains.
|
50 |
+
- **[BaseTool](/docs/concepts/tools/#tool-interface)**: The base class for all tools in LangChain.
|
51 |
+
- **[batch](/docs/concepts/runnables)**: Use to execute a runnable with batch inputs.
|
52 |
+
- **[bind_tools](/docs/concepts/tool_calling/#tool-binding)**: Allows models to interact with tools.
|
53 |
+
- **[Caching](/docs/concepts/chat_models#caching)**: Storing results to avoid redundant calls to a chat model.
|
54 |
+
- **[Chat models](/docs/concepts/multimodality/#multimodality-in-chat-models)**: Chat models that handle multiple data modalities.
|
55 |
+
- **[Configurable runnables](/docs/concepts/runnables/#configurable-runnables)**: Creating configurable Runnables.
|
56 |
+
- **[Context window](/docs/concepts/chat_models#context-window)**: The maximum size of input a chat model can process.
|
57 |
+
- **[Conversation patterns](/docs/concepts/chat_history#conversation-patterns)**: Common patterns in chat interactions.
|
58 |
+
- **[Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html)**: LangChain's representation of a document.
|
59 |
+
- **[Embedding models](/docs/concepts/multimodality/#multimodality-in-embedding-models)**: Models that generate vector embeddings for various data types.
|
60 |
+
- **[HumanMessage](/docs/concepts/messages#humanmessage)**: Represents a message from a human user.
|
61 |
+
- **[InjectedState](/docs/concepts/tools#injectedstate)**: A state injected into a tool function.
|
62 |
+
- **[InjectedStore](/docs/concepts/tools#injectedstore)**: A store that can be injected into a tool for data persistence.
|
63 |
+
- **[InjectedToolArg](/docs/concepts/tools#injectedtoolarg)**: Mechanism to inject arguments into tool functions.
|
64 |
+
- **[input and output types](/docs/concepts/runnables#input-and-output-types)**: Types used for input and output in Runnables.
|
65 |
+
- **[Integration packages](/docs/concepts/architecture/#integration-packages)**: Third-party packages that integrate with LangChain.
|
66 |
+
- **[Integration tests](/docs/concepts/testing#integration-tests)**: Tests that verify the correctness of the interaction between components, usually run with access to the underlying API that powers an integration.
|
67 |
+
- **[invoke](/docs/concepts/runnables)**: A standard method to invoke a Runnable.
|
68 |
+
- **[JSON mode](/docs/concepts/structured_outputs#json-mode)**: Returning responses in JSON format.
|
69 |
+
- **[langchain-community](/docs/concepts/architecture#langchain-community)**: Community-driven components for LangChain.
|
70 |
+
- **[langchain-core](/docs/concepts/architecture#langchain-core)**: Core langchain package. Includes base interfaces and in-memory implementations.
|
71 |
+
- **[langchain](/docs/concepts/architecture#langchain)**: A package for higher level components (e.g., some pre-built chains).
|
72 |
+
- **[langgraph](/docs/concepts/architecture#langgraph)**: Powerful orchestration layer for LangChain. Use to build complex pipelines and workflows.
|
73 |
+
- **[langserve](/docs/concepts/architecture#langserve)**: Used to deploy LangChain Runnables as REST endpoints. Uses FastAPI. Works primarily for LangChain Runnables, does not currently integrate with LangGraph.
|
74 |
+
- **[LLMs (legacy)](/docs/concepts/text_llms)**: Older language models that take a string as input and return a string as output.
|
75 |
+
- **[Managing chat history](/docs/concepts/chat_history#managing-chat-history)**: Techniques to maintain and manage the chat history.
|
76 |
+
- **[OpenAI format](/docs/concepts/messages#openai-format)**: OpenAI's message format for chat models.
|
77 |
+
- **[Propagation of RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig)**: Propagating configuration through Runnables. Read if working with python 3.9, 3.10 and async.
|
78 |
+
- **[rate-limiting](/docs/concepts/chat_models#rate-limiting)**: Client side rate limiting for chat models.
|
79 |
+
- **[RemoveMessage](/docs/concepts/messages/#removemessage)**: An abstraction used to remove a message from chat history, used primarily in LangGraph.
|
80 |
+
- **[role](/docs/concepts/messages#role)**: Represents the role (e.g., user, assistant) of a chat message.
|
81 |
+
- **[RunnableConfig](/docs/concepts/runnables/#runnableconfig)**: Use to pass run time information to Runnables (e.g., `run_name`, `run_id`, `tags`, `metadata`, `max_concurrency`, `recursion_limit`, `configurable`).
|
82 |
+
- **[Standard parameters for chat models](/docs/concepts/chat_models#standard-parameters)**: Parameters such as API key, `temperature`, and `max_tokens`.
|
83 |
+
- **[Standard tests](/docs/concepts/testing#standard-tests)**: A defined set of unit and integration tests that all integrations must pass.
|
84 |
+
- **[stream](/docs/concepts/streaming)**: Use to stream output from a Runnable or a graph.
|
85 |
+
- **[Tokenization](/docs/concepts/tokens)**: The process of converting data into tokens and vice versa.
|
86 |
+
- **[Tokens](/docs/concepts/tokens)**: The basic unit that a language model reads, processes, and generates under the hood.
|
87 |
+
- **[Tool artifacts](/docs/concepts/tools#tool-artifacts)**: Add artifacts to the output of a tool that will not be sent to the model, but will be available for downstream processing.
|
88 |
+
- **[Tool binding](/docs/concepts/tool_calling#tool-binding)**: Binding tools to models.
|
89 |
+
- **[@tool](/docs/concepts/tools/#create-tools-using-the-tool-decorator)**: Decorator for creating tools in LangChain.
|
90 |
+
- **[Toolkits](/docs/concepts/tools#toolkits)**: A collection of tools that can be used together.
|
91 |
+
- **[ToolMessage](/docs/concepts/messages#toolmessage)**: Represents a message that contains the results of a tool execution.
|
92 |
+
- **[Unit tests](/docs/concepts/testing#unit-tests)**: Tests that verify the correctness of individual components, run in isolation without access to the Internet.
|
93 |
+
- **[Vector stores](/docs/concepts/vectorstores)**: Datastores specialized for storing and efficiently searching vector embeddings.
|
94 |
+
- **[with_structured_output](/docs/concepts/structured_outputs/#structured-output-method)**: A helper method for chat models that natively support [tool calling](/docs/concepts/tool_calling) to get structured output matching a given schema specified via Pydantic, JSON schema or a function.
|
95 |
+
- **[with_types](/docs/concepts/runnables#with_types)**: Method to overwrite the input and output types of a runnable. Useful when working with complex LCEL chains and deploying with LangServe.
|
langchain_md_files/concepts/key_value_stores.mdx
ADDED
@@ -0,0 +1,38 @@
1 |
+
# Key-value stores
|
2 |
+
|
3 |
+
## Overview
|
4 |
+
|
5 |
+
LangChain provides a key-value store interface for storing and retrieving data.
|
6 |
+
|
7 |
+
LangChain includes a [`BaseStore`](https://python.langchain.com/api_reference/core/stores/langchain_core.stores.BaseStore.html) interface,
|
8 |
+
which allows for storage of arbitrary data. However, LangChain components that require KV-storage accept a
|
9 |
+
more specific `BaseStore[str, bytes]` instance that stores binary data (referred to as a `ByteStore`), and internally take care of
|
10 |
+
encoding and decoding data for their specific needs.
|
11 |
+
|
12 |
+
This means that as a user, you only need to think about one type of store rather than different ones for different types of data.
|
13 |
+
|
14 |
+
## Usage
|
15 |
+
|
16 |
+
The key-value store interface in LangChain is used primarily for:
|
17 |
+
|
18 |
+
1. Caching [embeddings](/docs/concepts/embedding_models) via [CacheBackedEmbeddings](https://python.langchain.com/api_reference/langchain/embeddings/langchain.embeddings.cache.CacheBackedEmbeddings.html#langchain.embeddings.cache.CacheBackedEmbeddings) to avoid recomputing embeddings for repeated queries or when re-indexing content.
|
19 |
+
|
20 |
+
2. As a simple [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) persistence layer in some retrievers.
|
21 |
+
|
22 |
+
Please see these how-to guides for more information:
|
23 |
+
|
24 |
+
* [How to cache embeddings guide](/docs/how_to/caching_embeddings/).
|
25 |
+
* [How to retrieve using multiple vectors per document](/docs/how_to/custom_retriever/).
|
26 |
+
|
27 |
+
## Interface
|
28 |
+
|
29 |
+
All [`BaseStores`](https://python.langchain.com/api_reference/core/stores/langchain_core.stores.BaseStore.html) support the following interface. Note that the interface allows for modifying **multiple** key-value pairs at once:
|
30 |
+
|
31 |
+
- `mget(key: Sequence[str]) -> List[Optional[bytes]]`: get the contents of multiple keys, returning `None` if the key does not exist
|
32 |
+
- `mset(key_value_pairs: Sequence[Tuple[str, bytes]]) -> None`: set the contents of multiple keys
|
33 |
+
- `mdelete(key: Sequence[str]) -> None`: delete multiple keys
|
34 |
+
- `yield_keys(prefix: Optional[str] = None) -> Iterator[str]`: yield all keys in the store, optionally filtering by a prefix
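As a short illustration, here is a minimal sketch using the in-memory `ByteStore` implementation from `langchain_core`:

```python
from langchain_core.stores import InMemoryByteStore

store = InMemoryByteStore()

# Set and get multiple key-value pairs at once
store.mset([("key1", b"value1"), ("key2", b"value2")])
store.mget(["key1", "key2", "missing"])  # -> [b'value1', b'value2', None]

# Iterate over keys, optionally filtering by a prefix, then delete
list(store.yield_keys(prefix="key"))  # -> ['key1', 'key2']
store.mdelete(["key1"])
```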
|
35 |
+
|
36 |
+
## Integrations
|
37 |
+
|
38 |
+
Please reference the [stores integration page](/docs/integrations/stores/) for a list of available key-value store integrations.
|
langchain_md_files/concepts/lcel.mdx
ADDED
@@ -0,0 +1,221 @@
1 |
+
# LangChain Expression Language (LCEL)
|
2 |
+
|
3 |
+
:::info Prerequisites
|
4 |
+
* [Runnable Interface](/docs/concepts/runnables)
|
5 |
+
:::
|
6 |
+
|
7 |
+
The **L**ang**C**hain **E**xpression **L**anguage (LCEL) takes a [declarative](https://en.wikipedia.org/wiki/Declarative_programming) approach to building new [Runnables](/docs/concepts/runnables) from existing Runnables.
|
8 |
+
|
9 |
+
This means that you describe what *should* happen, rather than *how* it should happen, allowing LangChain to optimize the run-time execution of the chains.
|
10 |
+
|
11 |
+
We often refer to a `Runnable` created using LCEL as a "chain". It's important to remember that a "chain" is a `Runnable` and it implements the full [Runnable Interface](/docs/concepts/runnables).
|
12 |
+
|
13 |
+
:::note
|
14 |
+
* The [LCEL cheatsheet](/docs/how_to/lcel_cheatsheet/) shows common patterns that involve the Runnable interface and LCEL expressions.
|
15 |
+
* Please see the following list of [how-to guides](/docs/how_to/#langchain-expression-language-lcel) that cover common tasks with LCEL.
|
16 |
+
* A list of built-in `Runnables` can be found in the [LangChain Core API Reference](https://python.langchain.com/api_reference/core/runnables.html). Many of these Runnables are useful when composing custom "chains" in LangChain using LCEL.
|
17 |
+
:::
|
18 |
+
|
19 |
+
## Benefits of LCEL
|
20 |
+
|
21 |
+
LangChain optimizes the run-time execution of chains built with LCEL in a number of ways:
|
22 |
+
|
23 |
+
- **Optimized parallel execution**: Run Runnables in parallel using [RunnableParallel](#runnableparallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables/#optimized-parallel-execution-batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially.
|
24 |
+
- **Guaranteed Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables/#asynchronous-support). This can be useful when running chains in a server environment where you want to handle a large number of requests concurrently.
|
25 |
+
- **Simplify streaming**: LCEL chains can be streamed, allowing for incremental output as the chain is executed. LangChain can optimize the streaming of the output to minimize the time-to-first-token (the time elapsed until the first chunk of output from a [chat model](/docs/concepts/chat_models) or [llm](/docs/concepts/text_llms) comes out).
|
26 |
+
|
27 |
+
Other benefits include:
|
28 |
+
|
29 |
+
- [**Seamless LangSmith tracing**](https://docs.smith.langchain.com)
|
30 |
+
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
|
31 |
+
With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.smith.langchain.com/) for maximum observability and debuggability.
|
32 |
+
- **Standard API**: Because all chains are built using the Runnable interface, they can be used in the same way as any other Runnable.
|
33 |
+
- [**Deployable with LangServe**](/docs/concepts/architecture#langserve): Chains built with LCEL can be deployed with LangServe for production use.
|
34 |
+
|
35 |
+
## Should I use LCEL?
|
36 |
+
|
37 |
+
LCEL is an [orchestration solution](https://en.wikipedia.org/wiki/Orchestration_(computing)) -- it allows LangChain to handle run-time execution of chains in an optimized way.
|
38 |
+
|
39 |
+
While we have seen users run chains with hundreds of steps in production, we generally recommend using LCEL for simpler orchestration tasks. When the application requires complex state management, branching, cycles or multiple agents, we recommend that users take advantage of [LangGraph](/docs/concepts/architecture#langgraph).
|
40 |
+
|
41 |
+
In LangGraph, users define graphs that specify the application's flow. This allows users to keep using LCEL within individual nodes when LCEL is needed, while making it easy to define complex orchestration logic that is more readable and maintainable.
|
42 |
+
|
43 |
+
Here are some guidelines:
|
44 |
+
|
45 |
+
* If you are making a single LLM call, you don't need LCEL; instead call the underlying [chat model](/docs/concepts/chat_models) directly.
|
46 |
+
* If you have a simple chain (e.g., prompt + llm + parser, a simple retrieval setup, etc.), LCEL is a reasonable fit, provided you're taking advantage of the LCEL benefits; a concrete sketch follows this list.
|
47 |
+
* If you're building a complex chain (e.g., with branching, cycles, multiple agents, etc.) use [LangGraph](/docs/concepts/architecture#langgraph) instead. Remember that you can always use LCEL within individual nodes in LangGraph.
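To make the "simple chain" case concrete, here is a minimal sketch of a typical prompt + model + parser chain (it assumes `langchain-openai` is installed and configured; the model name is just an example):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Write a one-line summary of: {topic}")

# prompt, model, and parser are each Runnables; `|` composes them into a RunnableSequence
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

chain.invoke({"topic": "vector databases"})
```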
|
48 |
+
|
49 |
+
## Composition Primitives
|
50 |
+
|
51 |
+
`LCEL` chains are built by composing existing `Runnables` together. The two main composition primitives are [RunnableSequence](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableSequence.html#langchain_core.runnables.base.RunnableSequence) and [RunnableParallel](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableParallel.html#langchain_core.runnables.base.RunnableParallel).
|
52 |
+
|
53 |
+
Many other composition primitives (e.g., [RunnableAssign](
|
54 |
+
https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.passthrough.RunnableAssign.html#langchain_core.runnables.passthrough.RunnableAssign
|
55 |
+
)) can be thought of as variations of these two primitives.
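For instance, `RunnableAssign` (typically created via `RunnablePassthrough.assign`) runs runnables in parallel like `RunnableParallel`, but also passes the original input keys through. A quick sketch:

```python
from langchain_core.runnables import RunnablePassthrough

# Keep the original keys and add a new "doubled" key computed from the input
chain = RunnablePassthrough.assign(doubled=lambda x: x["n"] * 2)

chain.invoke({"n": 3})  # -> {"n": 3, "doubled": 6}
```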
|
56 |
+
|
57 |
+
:::note
|
58 |
+
You can find a list of all composition primitives in the [LangChain Core API Reference](https://python.langchain.com/api_reference/core/runnables.html).
|
59 |
+
:::
|
60 |
+
|
61 |
+
### RunnableSequence
|
62 |
+
|
63 |
+
`RunnableSequence` is a composition primitive that allows you to "chain" multiple runnables sequentially, with the output of one runnable serving as the input to the next.
|
64 |
+
|
65 |
+
```python
|
66 |
+
from langchain_core.runnables import RunnableSequence
|
67 |
+
chain = RunnableSequence(runnable1, runnable2)
|
68 |
+
```
|
69 |
+
|
70 |
+
Invoking the `chain` with some input:
|
71 |
+
|
72 |
+
```python
|
73 |
+
final_output = chain.invoke(some_input)
|
74 |
+
```
|
75 |
+
|
76 |
+
corresponds to the following:
|
77 |
+
|
78 |
+
```python
|
79 |
+
output1 = runnable1.invoke(some_input)
|
80 |
+
final_output = runnable2.invoke(output1)
|
81 |
+
```
|
82 |
+
|
83 |
+
:::note
|
84 |
+
`runnable1` and `runnable2` are placeholders for any `Runnable` that you want to chain together.
|
85 |
+
:::
|
86 |
+
|
87 |
+
### RunnableParallel
|
88 |
+
|
89 |
+
`RunnableParallel` is a composition primitive that allows you to run multiple runnables concurrently, with the same input provided to each.
|
90 |
+
|
91 |
+
```python
|
92 |
+
from langchain_core.runnables import RunnableParallel
|
93 |
+
chain = RunnableParallel({
|
94 |
+
"key1": runnable1,
|
95 |
+
"key2": runnable2,
|
96 |
+
})
|
97 |
+
```
|
98 |
+
|
99 |
+
Invoking the `chain` with some input:
|
100 |
+
|
101 |
+
```python
|
102 |
+
final_output = chain.invoke(some_input)
|
103 |
+
```
|
104 |
+
|
105 |
+
Will yield a `final_output` dictionary with the same keys as the input dictionary, but with the values replaced by the output of the corresponding runnable.
|
106 |
+
|
107 |
+
```python
|
108 |
+
{
|
109 |
+
"key1": runnable1.invoke(some_input),
|
110 |
+
"key2": runnable2.invoke(some_input),
|
111 |
+
}
|
112 |
+
```
|
113 |
+
|
114 |
+
Recall that the runnables are executed in parallel, so while the result is the same as the
|
115 |
+
dictionary comprehension shown above, the execution time is much faster.
|
116 |
+
|
117 |
+
:::note
|
118 |
+
`RunnableParallel` supports both synchronous and asynchronous execution (as all `Runnables` do).
|
119 |
+
|
120 |
+
* For synchronous execution, `RunnableParallel` uses a [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor) to run the runnables concurrently.
|
121 |
+
* For asynchronous execution, `RunnableParallel` uses [asyncio.gather](https://docs.python.org/3/library/asyncio.html#asyncio.gather) to run the runnables concurrently.
|
122 |
+
:::
|
123 |
+
|
124 |
+
## Composition Syntax
|
125 |
+
|
126 |
+
The usage of `RunnableSequence` and `RunnableParallel` is so common that we created a shorthand syntax for using them. This helps
|
127 |
+
to make the code more readable and concise.
|
128 |
+
|
129 |
+
### The `|` operator
|
130 |
+
|
131 |
+
We have [overloaded](https://docs.python.org/3/reference/datamodel.html#special-method-names) the `|` operator to create a `RunnableSequence` from two `Runnables`.
|
132 |
+
|
133 |
+
```python
|
134 |
+
chain = runnable1 | runnable2
|
135 |
+
```
|
136 |
+
|
137 |
+
is equivalent to:
|
138 |
+
|
139 |
+
```python
|
140 |
+
chain = RunnableSequence(runnable1, runnable2)
|
141 |
+
```
|
142 |
+
|
143 |
+
### The `.pipe` method
|
144 |
+
|
145 |
+
If you have moral qualms with operator overloading, you can use the `.pipe` method instead. This is equivalent to the `|` operator.
|
146 |
+
|
147 |
+
```python
|
148 |
+
chain = runnable1.pipe(runnable2)
|
149 |
+
```
|
150 |
+
|
151 |
+
### Coercion
|
152 |
+
|
153 |
+
LCEL applies automatic type coercion to make it easier to compose chains.
|
154 |
+
|
155 |
+
If you do not understand the type coercion, you can always use the `RunnableSequence` and `RunnableParallel` classes directly.
|
156 |
+
|
157 |
+
This will make the code more verbose, but it will also make it more explicit.
|
158 |
+
|
159 |
+
#### Dictionary to RunnableParallel
|
160 |
+
|
161 |
+
Inside an LCEL expression, a dictionary is automatically converted to a `RunnableParallel`.
|
162 |
+
|
163 |
+
For example, the following code:
|
164 |
+
|
165 |
+
```python
|
166 |
+
mapping = {
|
167 |
+
"key1": runnable1,
|
168 |
+
"key2": runnable2,
|
169 |
+
}
|
170 |
+
|
171 |
+
chain = mapping | runnable3
|
172 |
+
```
|
173 |
+
|
174 |
+
It gets automatically converted to the following:
|
175 |
+
|
176 |
+
```python
|
177 |
+
chain = RunnableSequence(RunnableParallel(mapping), runnable3)
|
178 |
+
```
|
179 |
+
|
180 |
+
:::caution
|
181 |
+
You have to be careful because the `mapping` dictionary is not a `RunnableParallel` object, it is just a dictionary. This means that the following code will raise an `AttributeError`:
|
182 |
+
|
183 |
+
```python
|
184 |
+
mapping.invoke(some_input)
|
185 |
+
```
|
186 |
+
:::
|
187 |
+
|
188 |
+
#### Function to RunnableLambda
|
189 |
+
|
190 |
+
Inside an LCEL expression, a function is automatically converted to a `RunnableLambda`.
|
191 |
+
|
192 |
+
```python
|
193 |
+
def some_func(x):
|
194 |
+
return x
|
195 |
+
|
196 |
+
chain = some_func | runnable1
|
197 |
+
```
|
198 |
+
|
199 |
+
It gets automatically converted to the following:
|
200 |
+
|
201 |
+
```python
|
202 |
+
chain = RunnableSequence(RunnableLambda(some_func), runnable1)
|
203 |
+
```
|
204 |
+
|
205 |
+
:::caution
|
206 |
+
You have to be careful because the lambda function is not a `RunnableLambda` object, it is just a function. This means that the following code will raise an `AttributeError`:
|
207 |
+
|
208 |
+
```python
|
209 |
+
lambda x: x + 1.invoke(some_input)
|
210 |
+
```
|
211 |
+
:::
|
212 |
+
|
213 |
+
## Legacy chains
|
214 |
+
|
215 |
+
LCEL aims to provide consistency around behavior and customization over legacy subclassed chains such as `LLMChain` and
|
216 |
+
`ConversationalRetrievalChain`. Many of these legacy chains hide important details like prompts, and as a wider variety
|
217 |
+
of viable models emerge, customization has become more and more important.
|
218 |
+
|
219 |
+
If you are currently using one of these legacy chains, please see [this guide for guidance on how to migrate](/docs/versions/migrating_chains).
|
220 |
+
|
221 |
+
For guides on how to do specific tasks with LCEL, check out [the relevant how-to guides](/docs/how_to/#langchain-expression-language-lcel).
|
langchain_md_files/concepts/messages.mdx
ADDED
@@ -0,0 +1,245 @@
1 |
+
# Messages
|
2 |
+
|
3 |
+
:::info Prerequisites
|
4 |
+
- [Chat Models](/docs/concepts/chat_models)
|
5 |
+
:::
|
6 |
+
|
7 |
+
## Overview
|
8 |
+
|
9 |
+
Messages are the unit of communication in [chat models](/docs/concepts/chat_models). They are used to represent the input and output of a chat model, as well as any additional context or metadata that may be associated with a conversation.
|
10 |
+
|
11 |
+
Each message has a **role** (e.g., "user", "assistant") and **content** (e.g., text, multimodal data) with additional metadata that varies depending on the chat model provider.
|
12 |
+
|
13 |
+
LangChain provides a unified message format that can be used across chat models, allowing users to work with different chat models without worrying about the specific details of the message format used by each model provider.
|
14 |
+
|
15 |
+
## What is inside a message?
|
16 |
+
|
17 |
+
A message typically consists of the following pieces of information:
|
18 |
+
|
19 |
+
- **Role**: The role of the message (e.g., "user", "assistant").
|
20 |
+
- **Content**: The content of the message (e.g., text, multimodal data).
|
21 |
+
- Additional metadata: id, name, [token usage](/docs/concepts/tokens) and other model-specific metadata.
|
22 |
+
|
23 |
+
### Role
|
24 |
+
|
25 |
+
Roles are used to distinguish between different types of messages in a conversation and help the chat model understand how to respond to a given sequence of messages.
|
26 |
+
|
27 |
+
| **Role** | **Description** |
|
28 |
+
|-----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
29 |
+
| **system** | Used to tell the chat model how to behave and provide additional context. Not supported by all chat model providers. |
|
30 |
+
| **user** | Represents input from a user interacting with the model, usually in the form of text or other interactive input. |
|
31 |
+
| **assistant** | Represents a response from the model, which can include text or a request to invoke tools. |
|
32 |
+
| **tool** | A message used to pass the results of a tool invocation back to the model after external data or processing has been retrieved. Used with chat models that support [tool calling](/docs/concepts/tool_calling). |
|
33 |
+
| **function** (legacy) | This is a legacy role, corresponding to OpenAI's legacy function-calling API. **tool** role should be used instead. |
|
34 |
+
|
35 |
+
### Content
|
36 |
+
|
37 |
+
The content of a message is text or a list of dictionaries representing [multimodal data](/docs/concepts/multimodality) (e.g., images, audio, video). The exact format of the content can vary between different chat model providers.
|
38 |
+
|
39 |
+
Currently, most chat models support text as the primary content type, with some models also supporting multimodal data. However, support for multimodal data is still limited across most chat model providers.
|
40 |
+
|
41 |
+
For more information see:
|
42 |
+
* [SystemMessage](#systemmessage) -- for content which should be passed to direct the conversation
|
43 |
+
* [HumanMessage](#humanmessage) -- for content in the input from the user.
|
44 |
+
* [AIMessage](#aimessage) -- for content in the response from the model.
|
45 |
+
* [Multimodality](/docs/concepts/multimodality) -- for more information on multimodal content.
|
46 |
+
|
47 |
+
### Other Message Data
|
48 |
+
|
49 |
+
Depending on the chat model provider, messages can include other data such as:
|
50 |
+
|
51 |
+
- **ID**: An optional unique identifier for the message.
|
52 |
+
- **Name**: An optional `name` property which allows differentiating between different entities/speakers with the same role. Not all models support this!
|
53 |
+
- **Metadata**: Additional information about the message, such as timestamps, token usage, etc.
|
54 |
+
- **Tool Calls**: A request made by the model to call one or more tools. See [tool calling](/docs/concepts/tool_calling) for more information.
|
55 |
+
|
56 |
+
## Conversation Structure
|
57 |
+
|
58 |
+
The sequence of messages passed into a chat model should follow a specific structure to ensure that the chat model can generate a valid response.
|
59 |
+
|
60 |
+
For example, a typical conversation structure might look like this:
|
61 |
+
|
62 |
+
1. **User Message**: "Hello, how are you?"
|
63 |
+
2. **Assistant Message**: "I'm doing well, thank you for asking."
|
64 |
+
3. **User Message**: "Can you tell me a joke?"
|
65 |
+
4. **Assistant Message**: "Sure! Why did the scarecrow win an award? Because he was outstanding in his field!"
|
66 |
+
|
67 |
+
Please read the [chat history](/docs/concepts/chat_history) guide for more information on managing chat history and ensuring that the conversation structure is correct.
|
68 |
+
|
69 |
+
## LangChain Messages
|
70 |
+
|
71 |
+
LangChain provides a unified message format that can be used across all chat models, allowing users to work with different chat models without worrying about the specific details of the message format used by each model provider.
|
72 |
+
|
73 |
+
LangChain messages are Python objects that subclass from a [BaseMessage](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.base.BaseMessage.html).
|
74 |
+
|
75 |
+
The five main message types are:
|
76 |
+
|
77 |
+
- [SystemMessage](#systemmessage): corresponds to **system** role
|
78 |
+
- [HumanMessage](#humanmessage): corresponds to **user** role
|
79 |
+
- [AIMessage](#aimessage): corresponds to **assistant** role
|
80 |
+
- [AIMessageChunk](#aimessagechunk): corresponds to **assistant** role, used for [streaming](/docs/concepts/streaming) responses
|
81 |
+
- [ToolMessage](#toolmessage): corresponds to **tool** role
|
82 |
+
|
83 |
+
Other important messages include:
|
84 |
+
|
85 |
+
- [RemoveMessage](#removemessage) -- does not correspond to any role. This is an abstraction, mostly used in [LangGraph](/docs/concepts/architecture#langgraph) to manage chat history.
|
86 |
+
- **Legacy** [FunctionMessage](#legacy-functionmessage): corresponds to the **function** role in OpenAI's **legacy** function-calling API.
|
87 |
+
|
88 |
+
You can find more information about **messages** in the [API Reference](https://python.langchain.com/api_reference/core/messages.html).
|
89 |
+
|
90 |
+
### SystemMessage
|
91 |
+
|
92 |
+
A `SystemMessage` is used to prime the behavior of the AI model and provide additional context, such as instructing the model to adopt a specific persona or setting the tone of the conversation (e.g., "This is a conversation about cooking").
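For example, a system message can simply be placed at the start of the message list (a minimal sketch; `model` is any chat model instance):

```python
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage("You are a helpful assistant that answers in French."),
    HumanMessage("Hello, how are you?"),
]

model.invoke(messages)
```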
|
93 |
+
|
94 |
+
Different chat providers may support system messages in one of the following ways:
|
95 |
+
|
96 |
+
* **Through a "system" message role**: In this case, a system message is included as part of the message sequence with the role explicitly set as "system."
|
97 |
+
* **Through a separate API parameter for system instructions**: Instead of being included as a message, system instructions are passed via a dedicated API parameter.
|
98 |
+
* **No support for system messages**: Some models do not support system messages at all.
|
99 |
+
|
100 |
+
Most major chat model providers support system instructions via either a chat message or a separate API parameter. LangChain will automatically adapt based on the provider’s capabilities. If the provider supports a separate API parameter for system instructions, LangChain will extract the content of a system message and pass it through that parameter.
|
101 |
+
|
102 |
+
If no system message is supported by the provider, in most cases LangChain will attempt to incorporate the system message's content into a HumanMessage or raise an exception if that is not possible. However, this behavior is not yet consistently enforced across all implementations, and if using a less popular implementation of a chat model (e.g., an implementation from the `langchain-community` package) it is recommended to check the specific documentation for that model.
|
103 |
+
|
104 |
+
### HumanMessage
|
105 |
+
|
106 |
+
The `HumanMessage` corresponds to the **"user"** role. A human message represents input from a user interacting with the model.
|
107 |
+
|
108 |
+
#### Text Content
|
109 |
+
|
110 |
+
Most chat models expect the user input to be in the form of text.
|
111 |
+
|
112 |
+
```python
|
113 |
+
from langchain_core.messages import HumanMessage
|
114 |
+
|
115 |
+
model.invoke([HumanMessage(content="Hello, how are you?")])
|
116 |
+
```
|
117 |
+
|
118 |
+
:::tip
|
119 |
+
When invoking a chat model with a string as input, LangChain will automatically convert the string into a `HumanMessage` object. This is mostly useful for quick testing.
|
120 |
+
|
121 |
+
```python
|
122 |
+
model.invoke("Hello, how are you?")
|
123 |
+
```
|
124 |
+
:::
|
125 |
+
|
126 |
+
#### Multi-modal Content
|
127 |
+
|
128 |
+
Some chat models accept multimodal inputs, such as images, audio, video, or files like PDFs.
|
129 |
+
|
130 |
+
Please see the [multimodality](/docs/concepts/multimodality) guide for more information.
|
131 |
+
|
132 |
+
### AIMessage
|
133 |
+
|
134 |
+
`AIMessage` is used to represent a message with the role **"assistant"**. This is the response from the model, which can include text or a request to invoke tools. It could also include other media types like images, audio, or video -- though this is still uncommon at the moment.
|
135 |
+
|
136 |
+
```python
|
137 |
+
from langchain_core.messages import HumanMessage
|
138 |
+
ai_message = model.invoke([HumanMessage("Tell me a joke")])
|
139 |
+
ai_message # <-- AIMessage
|
140 |
+
```
|
141 |
+
|
142 |
+
An `AIMessage` has the following attributes. The attributes which are **standardized** are the ones that LangChain attempts to standardize across different chat model providers. **raw** fields are specific to the model provider and may vary.
|
143 |
+
|
144 |
+
| Attribute | Standardized/Raw | Description |
|
145 |
+
|----------------------|:-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
146 |
+
| `content` | Raw | Usually a string, but can be a list of content blocks. See [content](#content) for details. |
|
147 |
+
| `tool_calls` | Standardized | Tool calls associated with the message. See [tool calling](/docs/concepts/tool_calling) for details. |
|
148 |
+
| `invalid_tool_calls` | Standardized | Tool calls with parsing errors associated with the message. See [tool calling](/docs/concepts/tool_calling) for details. |
|
149 |
+
| `usage_metadata` | Standardized | Usage metadata for a message, such as [token counts](/docs/concepts/tokens). See [Usage Metadata API Reference](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.UsageMetadata.html). |
|
150 |
+
| `id` | Standardized | An optional unique identifier for the message, ideally provided by the provider/model that created the message. |
|
151 |
+
| `response_metadata` | Raw | Response metadata, e.g., response headers, logprobs, token counts. |
|
152 |
+
|
153 |
+
#### content
|
154 |
+
|
155 |
+
The **content** property of an `AIMessage` represents the response generated by the chat model.
|
156 |
+
|
157 |
+
The content is either:
|
158 |
+
|
159 |
+
- **text** -- the norm for virtually all chat models.
|
160 |
+
- A **list of dictionaries** -- Each dictionary represents a content block and is associated with a `type`.
|
161 |
+
* Used by Anthropic for surfacing agent thought process when doing [tool calling](/docs/concepts/tool_calling).
|
162 |
+
* Used by OpenAI for audio outputs. Please see [multi-modal content](/docs/concepts/multimodality) for more information.
|
163 |
+
|
164 |
+
:::important
|
165 |
+
The **content** property is **not** standardized across different chat model providers, mostly because there are
|
166 |
+
still few examples to generalize from.
|
167 |
+
:::
|
168 |
+
|
169 |
+
### AIMessageChunk
|
170 |
+
|
171 |
+
It is common to [stream](/docs/concepts/streaming) responses for the chat model as they are being generated, so the user can see the response in real-time instead of waiting for the entire response to be generated before displaying it.
|
172 |
+
|
173 |
+
It is returned from the `stream`, `astream` and `astream_events` methods of the chat model.
|
174 |
+
|
175 |
+
For example,
|
176 |
+
|
177 |
+
```python
|
178 |
+
for chunk in model.stream([HumanMessage("what color is the sky?")]):
|
179 |
+
print(chunk)
|
180 |
+
```
|
181 |
+
|
182 |
+
`AIMessageChunk` follows nearly the same structure as `AIMessage`, but uses a different [ToolCallChunk](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.tool.ToolCallChunk.html#langchain_core.messages.tool.ToolCallChunk)
|
183 |
+
to be able to stream tool calling in a standardized manner.
|
184 |
+
|
185 |
+
|
186 |
+
#### Aggregating
|
187 |
+
|
188 |
+
`AIMessageChunks` support the `+` operator to merge them into a single `AIMessage`. This is useful when you want to display the final response to the user.
|
189 |
+
|
190 |
+
```python
|
191 |
+
ai_message = chunk1 + chunk2 + chunk3 + ...
|
192 |
+
```
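
For instance, a minimal sketch (reusing the `model` and `HumanMessage` from the examples above) that accumulates streamed chunks into a single message:

```python
full = None
for chunk in model.stream([HumanMessage("what color is the sky?")]):
    # Adding chunks merges their content (and tool call chunks) incrementally
    full = chunk if full is None else full + chunk

full  # <-- an AIMessageChunk holding the complete aggregated response
```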
|
193 |
+
|
194 |
+
### ToolMessage
|
195 |
+
|
196 |
+
This represents a message with role "tool", which contains the result of [calling a tool](/docs/concepts/tool_calling). In addition to `role` and `content`, this message has:
|
197 |
+
|
198 |
+
- a `tool_call_id` field which conveys the id of the call to the tool that was called to produce this result.
|
199 |
+
- an `artifact` field which can be used to pass along arbitrary artifacts of the tool execution which are useful to track but which should not be sent to the model.
|
200 |
+
|
201 |
+
Please see [tool calling](/docs/concepts/tool_calling) for more information.
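
As a rough sketch (the tool call id and result values below are made up for illustration), a `ToolMessage` can be constructed directly:

```python
from langchain_core.messages import ToolMessage

tool_message = ToolMessage(
    content="42",                    # the result that is sent back to the model
    tool_call_id="call_abc123",      # must match the id of the tool call issued by the model
    artifact={"raw_rows": [40, 2]},  # kept for the application, not sent to the model
)
```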
|
202 |
+
|
203 |
+
### RemoveMessage
|
204 |
+
|
205 |
+
This is a special message type that does not correspond to any roles. It is used
|
206 |
+
for managing chat history in [LangGraph](/docs/concepts/architecture#langgraph).
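
For illustration, a `RemoveMessage` is constructed with just the `id` of the message to remove (the id below is hypothetical):

```python
from langchain_core.messages import RemoveMessage

# Instructs LangGraph's message-state reducer to drop the message with this id
RemoveMessage(id="run-4f2a0b31")
```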
|
207 |
+
|
208 |
+
Please see the following for more information on how to use the `RemoveMessage`:
|
209 |
+
|
210 |
+
* [Memory conceptual guide](https://langchain-ai.github.io/langgraph/concepts/memory/)
|
211 |
+
* [How to delete messages](https://langchain-ai.github.io/langgraph/how-tos/memory/delete-messages/)
|
212 |
+
|
213 |
+
### (Legacy) FunctionMessage
|
214 |
+
|
215 |
+
This is a legacy message type, corresponding to OpenAI's legacy function-calling API. `ToolMessage` should be used instead to correspond to the updated tool-calling API.
|
216 |
+
|
217 |
+
## OpenAI Format
|
218 |
+
|
219 |
+
### Inputs
|
220 |
+
|
221 |
+
Chat models also accept OpenAI's format as **inputs** to chat models:
|
222 |
+
|
223 |
+
```python
|
224 |
+
chat_model.invoke([
|
225 |
+
{
|
226 |
+
"role": "user",
|
227 |
+
"content": "Hello, how are you?",
|
228 |
+
},
|
229 |
+
{
|
230 |
+
"role": "assistant",
|
231 |
+
"content": "I'm doing well, thank you for asking.",
|
232 |
+
},
|
233 |
+
{
|
234 |
+
"role": "user",
|
235 |
+
"content": "Can you tell me a joke?",
|
236 |
+
}
|
237 |
+
])
|
238 |
+
```
|
239 |
+
|
240 |
+
### Outputs
|
241 |
+
|
242 |
+
At the moment, the output of the model will be in terms of LangChain messages, so you will need to convert the output to the OpenAI format if you
|
243 |
+
need OpenAI format for the output as well.
|
244 |
+
|
245 |
+
The [convert_to_openai_messages](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.convert_to_openai_messages.html) utility function can be used to convert from LangChain messages to OpenAI format.
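
A small sketch of that utility:

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.messages.utils import convert_to_openai_messages

# Produces [{"role": "user", ...}, {"role": "assistant", ...}]
convert_to_openai_messages(
    [
        HumanMessage("Tell me a joke"),
        AIMessage("Why did the chicken cross the road?"),
    ]
)
```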
|
langchain_md_files/concepts/multimodality.mdx
ADDED
@@ -0,0 +1,88 @@
1 |
+
# Multimodality
|
2 |
+
|
3 |
+
## Overview
|
4 |
+
|
5 |
+
**Multimodality** refers to the ability to work with data that comes in different forms, such as text, audio, images, and video. Multimodality can appear in various components, allowing models and systems to handle and process a mix of these data types seamlessly.
|
6 |
+
|
7 |
+
- **Chat Models**: These could, in theory, accept and generate multimodal inputs and outputs, handling a variety of data types like text, images, audio, and video.
|
8 |
+
- **Embedding Models**: Embedding Models can represent multimodal content, embedding various forms of data—such as text, images, and audio—into vector spaces.
|
9 |
+
- **Vector Stores**: Vector stores could search over embeddings that represent multimodal data, enabling retrieval across different types of information.
|
10 |
+
|
11 |
+
## Multimodality in chat models
|
12 |
+
|
13 |
+
:::info Prerequisites
|
14 |
+
* [Chat models](/docs/concepts/chat_models)
|
15 |
+
* [Messages](/docs/concepts/messages)
|
16 |
+
:::
|
17 |
+
|
18 |
+
Multimodal support is still relatively new and less common, and model providers have not yet standardized on the "best" way to define the API. As such, LangChain's multimodal abstractions are lightweight and flexible, designed to accommodate different model providers' APIs and interaction patterns, but are **not** standardized across models.
|
19 |
+
|
20 |
+
### How to use multimodal models
|
21 |
+
|
22 |
+
* Use the [chat model integration table](/docs/integrations/chat/) to identify which models support multimodality.
|
23 |
+
* Reference the [relevant how-to guides](/docs/how_to/#multimodal) for specific examples of how to use multimodal models.
|
24 |
+
|
25 |
+
### What kind of multimodality is supported?
|
26 |
+
|
27 |
+
#### Inputs
|
28 |
+
|
29 |
+
Some models can accept multimodal inputs, such as images, audio, video, or files. The types of multimodal inputs supported depend on the model provider. For instance, [Google's Gemini](/docs/integrations/chat/google_generative_ai/) supports documents like PDFs as inputs.
|
30 |
+
|
31 |
+
Most chat models that support **multimodal inputs** also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
|
32 |
+
|
33 |
+
The gist of passing multimodal inputs to a chat model is to use content blocks that specify a type and corresponding data. For example, to pass an image to a chat model:
|
34 |
+
|
35 |
+
```python
|
36 |
+
from langchain_core.messages import HumanMessage
|
37 |
+
|
38 |
+
message = HumanMessage(
|
39 |
+
content=[
|
40 |
+
{"type": "text", "text": "describe the weather in this image"},
|
41 |
+
{"type": "image_url", "image_url": {"url": image_url}},
|
42 |
+
],
|
43 |
+
)
|
44 |
+
response = model.invoke([message])
|
45 |
+
```
|
46 |
+
|
47 |
+
:::caution
|
48 |
+
The exact format of the content blocks may vary depending on the model provider. Please refer to the chat model's
|
49 |
+
integration documentation for the correct format. Find the integration in the [chat model integration table](/docs/integrations/chat/).
|
50 |
+
:::
|
51 |
+
|
52 |
+
#### Outputs
|
53 |
+
|
54 |
+
Virtually no popular chat models support multimodal outputs at the time of writing (October 2024).
|
55 |
+
|
56 |
+
The only exception is OpenAI's chat model ([gpt-4o-audio-preview](/docs/integrations/chat/openai/)), which can generate audio outputs.
|
57 |
+
|
58 |
+
Multimodal outputs will appear as part of the [AIMessage](/docs/concepts/messages/#aimessage) response object.
|
59 |
+
|
60 |
+
Please see the [ChatOpenAI](/docs/integrations/chat/openai/) integration page for more information on how to use multimodal outputs.
|
61 |
+
|
62 |
+
#### Tools
|
63 |
+
|
64 |
+
Currently, no chat model is designed to work **directly** with multimodal data in a [tool call request](/docs/concepts/tool_calling) or [ToolMessage](/docs/concepts/tool_calling) result.
|
65 |
+
|
66 |
+
However, a chat model can easily interact with multimodal data by invoking tools with references (e.g., a URL) to the multimodal data, rather than the data itself. For example, any model capable of [tool calling](/docs/concepts/tool_calling) can be equipped with tools to download and process images, audio, or video.
|
67 |
+
|
68 |
+
## Multimodality in embedding models
|
69 |
+
|
70 |
+
:::info Prerequisites
|
71 |
+
* [Embedding Models](/docs/concepts/embedding_models)
|
72 |
+
:::
|
73 |
+
|
74 |
+
**Embeddings** are vector representations of data used for tasks like similarity search and retrieval.
|
75 |
+
|
76 |
+
The current [embedding interface](https://python.langchain.com/api_reference/core/embeddings/langchain_core.embeddings.embeddings.Embeddings.html#langchain_core.embeddings.embeddings.Embeddings) used in LangChain is optimized entirely for text-based data, and will **not** work with multimodal data.
|
77 |
+
|
78 |
+
As use cases involving multimodal search and retrieval tasks become more common, we expect to expand the embedding interface to accommodate other data types like images, audio, and video.
|
79 |
+
|
80 |
+
## Multimodality in vector stores
|
81 |
+
|
82 |
+
:::info Prerequisites
|
83 |
+
* [Vector stores](/docs/concepts/vectorstores)
|
84 |
+
:::
|
85 |
+
|
86 |
+
Vector stores are databases for storing and retrieving embeddings, which are typically used in search and retrieval tasks. Similar to embeddings, vector stores are currently optimized for text-based data.
|
87 |
+
|
88 |
+
As use cases involving multimodal search and retrieval tasks become more common, we expect to expand the vector store interface to accommodate other data types like images, audio, and video.
|
langchain_md_files/concepts/output_parsers.mdx
ADDED
@@ -0,0 +1,42 @@
1 |
+
# Output parsers
|
2 |
+
|
3 |
+
<span data-heading-keywords="output parser"></span>
|
4 |
+
|
5 |
+
:::note
|
6 |
+
|
7 |
+
The information here refers to parsers that take a text output from a model and try to parse it into a more structured representation.
|
8 |
+
More and more models are supporting function (or tool) calling, which handles this automatically.
|
9 |
+
It is recommended to use function/tool calling rather than output parsing.
|
10 |
+
See documentation for that [here](/docs/concepts/tool_calling).
|
11 |
+
|
12 |
+
:::
|
13 |
+
|
14 |
+
An `output parser` is responsible for taking the output of a model and transforming it into a more suitable format for downstream tasks.
|
15 |
+
Output parsers are useful when you are using LLMs to generate structured data, or to normalize output from chat models and LLMs.
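
As a brief, self-contained sketch (no API calls required), output parsers are Runnables that can be applied directly to message objects:

```python
from langchain_core.messages import AIMessage
from langchain_core.output_parsers import JsonOutputParser, StrOutputParser

# Extract the text content from a message
StrOutputParser().invoke(AIMessage("Hello!"))  # -> "Hello!"

# Parse a JSON payload out of the model's text output
JsonOutputParser().invoke(AIMessage('{"setup": "q", "punchline": "a"}'))  # -> dict
```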
|
16 |
+
|
17 |
+
LangChain has lots of different types of output parsers. This is a list of output parsers LangChain supports. The table below has various pieces of information:
|
18 |
+
|
19 |
+
- **Name**: The name of the output parser
|
20 |
+
- **Supports Streaming**: Whether the output parser supports streaming.
|
21 |
+
- **Has Format Instructions**: Whether the output parser has format instructions. This is generally available except when (a) the desired schema is not specified in the prompt but rather in other parameters (like OpenAI function calling), or (b) when the OutputParser wraps another OutputParser.
|
22 |
+
- **Calls LLM**: Whether this output parser itself calls an LLM. This is usually only done by output parsers that attempt to correct misformatted output.
|
23 |
+
- **Input Type**: Expected input type. Most output parsers work on both strings and messages, but some (like OpenAI Functions) need a message with specific kwargs.
|
24 |
+
- **Output Type**: The output type of the object returned by the parser.
|
25 |
+
- **Description**: Our commentary on this output parser and when to use it.
|
26 |
+
|
27 |
+
| Name | Supports Streaming | Has Format Instructions | Calls LLM | Input Type | Output Type | Description |
|
28 |
+
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------|-------------------------|-----------|--------------------|----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
29 |
+
| [Str](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | ✅ | | | `str` \| `Message` | String | Parses texts from message objects. Useful for handling variable formats of message content (e.g., extracting text from content blocks). |
|
30 |
+
| [JSON](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html) | ✅ | ✅ | | `str` \| `Message` | JSON object | Returns a JSON object as specified. You can specify a Pydantic model and it will return JSON for that model. Probably the most reliable output parser for getting structured data that does NOT use function calling. |
|
31 |
+
| [XML](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.xml.XMLOutputParser.html#langchain_core.output_parsers.xml.XMLOutputParser) | ✅ | ✅ | | `str` \| `Message` | `dict` | Returns a dictionary of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). |
|
32 |
+
| [CSV](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.list.CommaSeparatedListOutputParser.html#langchain_core.output_parsers.list.CommaSeparatedListOutputParser) | ✅ | ✅ | | `str` \| `Message` | `List[str]` | Returns a list of comma separated values. |
|
33 |
+
| [OutputFixing](https://python.langchain.com/api_reference/langchain/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html#langchain.output_parsers.fix.OutputFixingParser) | | | ✅ | `str` \| `Message` | | Wraps another output parser. If that output parser errors, then this will pass the error message and the bad output to an LLM and ask it to fix the output. |
|
34 |
+
| [RetryWithError](https://python.langchain.com/api_reference/langchain/output_parsers/langchain.output_parsers.retry.RetryWithErrorOutputParser.html#langchain.output_parsers.retry.RetryWithErrorOutputParser) | | | ✅ | `str` \| `Message` | | Wraps another output parser. If that output parser errors, then this will pass the original inputs, the bad output, and the error message to an LLM and ask it to fix it. Compared to OutputFixingParser, this one also sends the original instructions. |
|
35 |
+
| [Pydantic](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.pydantic.PydanticOutputParser.html#langchain_core.output_parsers.pydantic.PydanticOutputParser) | | ✅ | | `str` \| `Message` | `pydantic.BaseModel` | Takes a user defined Pydantic model and returns data in that format. |
|
36 |
+
| [YAML](https://python.langchain.com/api_reference/langchain/output_parsers/langchain.output_parsers.yaml.YamlOutputParser.html#langchain.output_parsers.yaml.YamlOutputParser) | | ✅ | | `str` \| `Message` | `pydantic.BaseModel` | Takes a user defined Pydantic model and returns data in that format. Uses YAML to encode it. |
|
37 |
+
| [PandasDataFrame](https://python.langchain.com/api_reference/langchain/output_parsers/langchain.output_parsers.pandas_dataframe.PandasDataFrameOutputParser.html#langchain.output_parsers.pandas_dataframe.PandasDataFrameOutputParser) | | ✅ | | `str` \| `Message` | `dict` | Useful for doing operations with pandas DataFrames. |
|
38 |
+
| [Enum](https://python.langchain.com/api_reference/langchain/output_parsers/langchain.output_parsers.enum.EnumOutputParser.html#langchain.output_parsers.enum.EnumOutputParser) | | ✅ | | `str` \| `Message` | `Enum` | Parses response into one of the provided enum values. |
|
39 |
+
| [Datetime](https://python.langchain.com/api_reference/langchain/output_parsers/langchain.output_parsers.datetime.DatetimeOutputParser.html#langchain.output_parsers.datetime.DatetimeOutputParser) | | ✅ | | `str` \| `Message` | `datetime.datetime` | Parses response into a datetime string. |
|
40 |
+
| [Structured](https://python.langchain.com/api_reference/langchain/output_parsers/langchain.output_parsers.structured.StructuredOutputParser.html#langchain.output_parsers.structured.StructuredOutputParser) | | ✅ | | `str` \| `Message` | `Dict[str, str]` | An output parser that returns structured information. It is less powerful than other output parsers since it only allows for fields to be strings. This can be useful when you are working with smaller LLMs. |
|
41 |
+
|
42 |
+
For specifics on how to use output parsers, see the [relevant how-to guides here](/docs/how_to/#output-parsers).
|
langchain_md_files/concepts/prompt_templates.mdx
ADDED
@@ -0,0 +1,79 @@
1 |
+
# Prompt Templates
|
2 |
+
|
3 |
+
Prompt templates help to translate user input and parameters into instructions for a language model.
|
4 |
+
This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.
|
5 |
+
|
6 |
+
Prompt Templates take as input a dictionary, where each key represents a variable in the prompt template to fill in.
|
7 |
+
|
8 |
+
Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or a list of messages.
|
9 |
+
The reason this PromptValue exists is to make it easy to switch between strings and messages.
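
For example, a PromptValue produced by a template can be converted either way (a small sketch):

```python
from langchain_core.prompts import PromptTemplate

prompt_value = PromptTemplate.from_template("Tell me a joke about {topic}").invoke({"topic": "cats"})

prompt_value.to_string()    # "Tell me a joke about cats"
prompt_value.to_messages()  # [HumanMessage(content="Tell me a joke about cats")]
```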
|
10 |
+
|
11 |
+
There are a few different types of prompt templates:
|
12 |
+
|
13 |
+
## String PromptTemplates
|
14 |
+
|
15 |
+
These prompt templates are used to format a single string, and generally are used for simpler inputs.
|
16 |
+
For example, a common way to construct and use a PromptTemplate is as follows:
|
17 |
+
|
18 |
+
```python
|
19 |
+
from langchain_core.prompts import PromptTemplate
|
20 |
+
|
21 |
+
prompt_template = PromptTemplate.from_template("Tell me a joke about {topic}")
|
22 |
+
|
23 |
+
prompt_template.invoke({"topic": "cats"})
|
24 |
+
```
|
25 |
+
|
26 |
+
## ChatPromptTemplates
|
27 |
+
|
28 |
+
These prompt templates are used to format a list of messages. These "templates" consist of a list of templates themselves.
|
29 |
+
For example, a common way to construct and use a ChatPromptTemplate is as follows:
|
30 |
+
|
31 |
+
```python
|
32 |
+
from langchain_core.prompts import ChatPromptTemplate
|
33 |
+
|
34 |
+
prompt_template = ChatPromptTemplate([
|
35 |
+
("system", "You are a helpful assistant"),
|
36 |
+
("user", "Tell me a joke about {topic}")
|
37 |
+
])
|
38 |
+
|
39 |
+
prompt_template.invoke({"topic": "cats"})
|
40 |
+
```
|
41 |
+
|
42 |
+
In the above example, this ChatPromptTemplate will construct two messages when called.
|
43 |
+
The first is a system message, that has no variables to format.
|
44 |
+
The second is a HumanMessage, and will be formatted by the `topic` variable the user passes in.
|
45 |
+
|
46 |
+
## MessagesPlaceholder
|
47 |
+
<span data-heading-keywords="messagesplaceholder"></span>
|
48 |
+
|
49 |
+
This prompt template is responsible for adding a list of messages in a particular place.
|
50 |
+
In the above ChatPromptTemplate, we saw how we could format two messages, each one a string.
|
51 |
+
But what if we wanted the user to pass in a list of messages that we would slot into a particular spot?
|
52 |
+
This is how you use MessagesPlaceholder.
|
53 |
+
|
54 |
+
```python
|
55 |
+
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
|
56 |
+
from langchain_core.messages import HumanMessage
|
57 |
+
|
58 |
+
prompt_template = ChatPromptTemplate([
|
59 |
+
("system", "You are a helpful assistant"),
|
60 |
+
MessagesPlaceholder("msgs")
|
61 |
+
])
|
62 |
+
|
63 |
+
prompt_template.invoke({"msgs": [HumanMessage(content="hi!")]})
|
64 |
+
```
|
65 |
+
|
66 |
+
This will produce a list of two messages, the first one being a system message, and the second one being the HumanMessage we passed in.
|
67 |
+
If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in).
|
68 |
+
This is useful for letting a list of messages be slotted into a particular spot.
|
69 |
+
|
70 |
+
An alternative way to accomplish the same thing without using the `MessagesPlaceholder` class explicitly is:
|
71 |
+
|
72 |
+
```python
|
73 |
+
prompt_template = ChatPromptTemplate([
|
74 |
+
("system", "You are a helpful assistant"),
|
75 |
+
("placeholder", "{msgs}") # <-- This is the changed part
|
76 |
+
])
|
77 |
+
```
|
78 |
+
|
79 |
+
For specifics on how to use prompt templates, see the [relevant how-to guides here](/docs/how_to/#prompt-templates).
|
langchain_md_files/concepts/rag.mdx
ADDED
@@ -0,0 +1,98 @@
1 |
+
# Retrieval augmented generation (RAG)
|
2 |
+
|
3 |
+
:::info[Prerequisites]
|
4 |
+
|
5 |
+
* [Retrieval](/docs/concepts/retrieval/)
|
6 |
+
|
7 |
+
:::
|
8 |
+
|
9 |
+
## Overview
|
10 |
+
|
11 |
+
Retrieval Augmented Generation (RAG) is a powerful technique that enhances [language models](/docs/concepts/chat_models/) by combining them with external knowledge bases.
|
12 |
+
RAG addresses [a key limitation of models](https://www.glean.com/blog/how-to-build-an-ai-assistant-for-the-enterprise): models rely on fixed training datasets, which can lead to outdated or incomplete information.
|
13 |
+
When given a query, RAG systems first search a knowledge base for relevant information.
|
14 |
+
The system then incorporates this retrieved information into the model's prompt.
|
15 |
+
The model uses the provided context to generate a response to the query.
|
16 |
+
By bridging the gap between vast language models and dynamic, targeted information retrieval, RAG is a powerful technique for building more capable and reliable AI systems.
|
17 |
+
|
18 |
+
## Key concepts
|
19 |
+
|
20 |
+

|
21 |
+
|
22 |
+
(1) **Retrieval system**: Retrieve relevant information from a knowledge base.
|
23 |
+
|
24 |
+
(2) **Adding external knowledge**: Pass retrieved information to a model.
|
25 |
+
|
26 |
+
## Retrieval system
|
27 |
+
|
28 |
+
Models have internal knowledge that is often fixed, or at least not updated frequently due to the high cost of training.
|
29 |
+
This limits their ability to answer questions about current events, or to provide specific domain knowledge.
|
30 |
+
To address this, there are various knowledge injection techniques like [fine-tuning](https://hamel.dev/blog/posts/fine_tuning_valuable.html) or continued pre-training.
|
31 |
+
Both are [costly](https://www.glean.com/blog/how-to-build-an-ai-assistant-for-the-enterprise) and often [poorly suited](https://www.anyscale.com/blog/fine-tuning-is-for-form-not-facts) for factual retrieval.
|
32 |
+
Using a retrieval system offers several advantages:
|
33 |
+
|
34 |
+
- **Up-to-date information**: RAG can access and utilize the latest data, keeping responses current.
|
35 |
+
- **Domain-specific expertise**: With domain-specific knowledge bases, RAG can provide answers in specific domains.
|
36 |
+
- **Reduced hallucination**: Grounding responses in retrieved facts helps minimize false or invented information.
|
37 |
+
- **Cost-effective knowledge integration**: RAG offers a more efficient alternative to expensive model fine-tuning.
|
38 |
+
|
39 |
+
:::info[Further reading]
|
40 |
+
|
41 |
+
See our conceptual guide on [retrieval](/docs/concepts/retrieval/).
|
42 |
+
|
43 |
+
:::
|
44 |
+
|
45 |
+
## Adding external knowledge
|
46 |
+
|
47 |
+
With a retrieval system in place, we need to pass knowledge from this system to the model.
|
48 |
+
A RAG pipeline typically achieves this following these steps:
|
49 |
+
|
50 |
+
- Receive an input query.
|
51 |
+
- Use the retrieval system to search for relevant information based on the query.
|
52 |
+
- Incorporate the retrieved information into the prompt sent to the LLM.
|
53 |
+
- Generate a response that leverages the retrieved context.
|
54 |
+
|
55 |
+
As an example, here's a simple RAG workflow that passes information from a [retriever](/docs/concepts/retrievers/) to a [chat model](/docs/concepts/chat_models/):
|
56 |
+
|
57 |
+
```python
|
58 |
+
from langchain_openai import ChatOpenAI
|
59 |
+
from langchain_core.messages import SystemMessage, HumanMessage
|
60 |
+
|
61 |
+
# Define a system prompt that tells the model how to use the retrieved context
|
62 |
+
system_prompt = """You are an assistant for question-answering tasks.
|
63 |
+
Use the following pieces of retrieved context to answer the question.
|
64 |
+
If you don't know the answer, just say that you don't know.
|
65 |
+
Use three sentences maximum and keep the answer concise.
|
66 |
+
Context: {context}:"""
|
67 |
+
|
68 |
+
# Define a question
|
69 |
+
question = """What are the main components of an LLM-powered autonomous agent system?"""
|
70 |
+
|
71 |
+
# Retrieve relevant documents
|
72 |
+
docs = retriever.invoke(question)
|
73 |
+
|
74 |
+
# Combine the documents into a single string
|
75 |
+
docs_text = "".join(d.page_content for d in docs)
|
76 |
+
|
77 |
+
# Populate the system prompt with the retrieved context
|
78 |
+
system_prompt_fmt = system_prompt.format(context=docs_text)
|
79 |
+
|
80 |
+
# Create a model
|
81 |
+
model = ChatOpenAI(model="gpt-4o", temperature=0)
|
82 |
+
|
83 |
+
# Generate a response
|
84 |
+
questions = model.invoke([SystemMessage(content=system_prompt_fmt),
|
85 |
+
HumanMessage(content=question)])
|
86 |
+
```
|
87 |
+
|
88 |
+
:::info[Further reading]
|
89 |
+
|
90 |
+
RAG is a deep area with many possible optimization and design choices:
|
91 |
+
|
92 |
+
* See [this excellent blog](https://cameronrwolfe.substack.com/p/a-practitioners-guide-to-retrieval?utm_source=profile&utm_medium=reader2) from Cameron Wolfe for a comprehensive overview and history of RAG.
|
93 |
+
* See our [RAG how-to guides](/docs/how_to/#qa-with-rag).
|
94 |
+
* See our RAG [tutorials](/docs/tutorials/).
|
95 |
+
* See our RAG from Scratch course, with [code](https://github.com/langchain-ai/rag-from-scratch) and [video playlist](https://www.youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x).
|
96 |
+
* Also, see our RAG from Scratch course [on Freecodecamp](https://youtu.be/sVcwVQRHIc8?feature=shared).
|
97 |
+
|
98 |
+
:::
|
langchain_md_files/concepts/retrieval.mdx
ADDED
@@ -0,0 +1,242 @@
1 |
+
# Retrieval
|
2 |
+
|
3 |
+
:::info[Prerequisites]
|
4 |
+
|
5 |
+
* [Retrievers](/docs/concepts/retrievers/)
|
6 |
+
* [Vector stores](/docs/concepts/vectorstores/)
|
7 |
+
* [Embeddings](/docs/concepts/embedding_models/)
|
8 |
+
* [Text splitters](/docs/concepts/text_splitters/)
|
9 |
+
|
10 |
+
:::
|
11 |
+
|
12 |
+
:::danger[Security]
|
13 |
+
|
14 |
+
Some of the concepts reviewed here utilize models to generate queries (e.g., for SQL or graph databases).
|
15 |
+
There are inherent risks in doing this.
|
16 |
+
Make sure that your database connection permissions are scoped as narrowly as possible for your application's needs.
|
17 |
+
This will mitigate, though not eliminate, the risks of building a model-driven system capable of querying databases.
|
18 |
+
For more on general security best practices, see our [security guide](/docs/security/).
|
19 |
+
|
20 |
+
:::
|
21 |
+
|
22 |
+
## Overview
|
23 |
+
|
24 |
+
Retrieval systems are fundamental to many AI applications, efficiently identifying relevant information from large datasets.
|
25 |
+
These systems accommodate various data formats:
|
26 |
+
|
27 |
+
- Unstructured text (e.g., documents) is often stored in vector stores or lexical search indexes.
|
28 |
+
- Structured data is typically housed in relational or graph databases with defined schemas.
|
29 |
+
|
30 |
+
Despite the growing diversity in data formats, modern AI applications increasingly aim to make all types of data accessible through natural language interfaces.
|
31 |
+
Models play a crucial role in this process by translating natural language queries into formats compatible with the underlying search index or database.
|
32 |
+
This translation enables more intuitive and flexible interactions with complex data structures.
|
33 |
+
|
34 |
+
## Key concepts
|
35 |
+
|
36 |
+

|
37 |
+
|
38 |
+
(1) **Query analysis**: A process where models transform or construct search queries to optimize retrieval.
|
39 |
+
|
40 |
+
(2) **Information retrieval**: Search queries are used to fetch information from various retrieval systems.
|
41 |
+
|
42 |
+
## Query analysis
|
43 |
+
|
44 |
+
While users typically prefer to interact with retrieval systems using natural language, these systems may require specific query syntax or benefit from certain keywords.
|
45 |
+
Query analysis serves as a bridge between raw user input and optimized search queries. Some common applications of query analysis include:
|
46 |
+
|
47 |
+
1. **Query Re-writing**: Queries can be re-written or expanded to improve semantic or lexical searches.
|
48 |
+
2. **Query Construction**: Search indexes may require structured queries (e.g., SQL for databases).
|
49 |
+
|
50 |
+
Query analysis employs models to transform or construct optimized search queries from raw user input.
|
51 |
+
|
52 |
+
### Query re-writing
|
53 |
+
|
54 |
+
Retrieval systems should ideally handle a wide spectrum of user inputs, from simple and poorly worded queries to complex, multi-faceted questions.
|
55 |
+
To achieve this versatility, a popular approach is to use models to transform raw user queries into more effective search queries.
|
56 |
+
This transformation can range from simple keyword extraction to sophisticated query expansion and reformulation.
|
57 |
+
Here are some key benefits of using models for query analysis in unstructured data retrieval:
|
58 |
+
|
59 |
+
1. **Query Clarification**: Models can rephrase ambiguous or poorly worded queries for clarity.
|
60 |
+
2. **Semantic Understanding**: They can capture the intent behind a query, going beyond literal keyword matching.
|
61 |
+
3. **Query Expansion**: Models can generate related terms or concepts to broaden the search scope.
|
62 |
+
4. **Complex Query Handling**: They can break down multi-part questions into simpler sub-queries.
|
63 |
+
|
64 |
+
Various techniques have been developed to leverage models for query re-writing, including:
|
65 |
+
|
66 |
+
| Name | When to use | Description |
|
67 |
+
|-----------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
68 |
+
| [Multi-query](/docs/how_to/MultiQueryRetriever/) | When you want to ensure high recall in retrieval by providing multiple phrasings of a question. | Rewrite the user question with multiple phrasings, retrieve documents for each rewritten question, return the unique documents for all queries. |
|
69 |
+
| [Decomposition](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a question can be broken down into smaller subproblems. | Decompose a question into a set of subproblems / questions, which can either be solved sequentially (use the answer from first + retrieval to answer the second) or in parallel (consolidate each answer into final answer). |
|
70 |
+
| [Step-back](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a higher-level conceptual understanding is required. | First prompt the LLM to ask a generic step-back question about higher-level concepts or principles, and retrieve relevant facts about them. Use this grounding to help answer the user question. [Paper](https://arxiv.org/pdf/2310.06117). |
|
71 |
+
| [HyDE](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | If you have challenges retrieving relevant documents using the raw user inputs. | Use an LLM to convert questions into hypothetical documents that answer the question. Use the embedded hypothetical documents to retrieve real documents with the premise that doc-doc similarity search can produce more relevant matches. [Paper](https://arxiv.org/abs/2212.10496). |
|
72 |
+
|
73 |
+
As an example, query decomposition can simply be accomplished using prompting and a structured output that enforces a list of sub-questions.
|
74 |
+
These can then be run sequentially or in parallel on a downstream retrieval system.
|
75 |
+
|
76 |
+
```python
|
77 |
+
from typing import List
|
78 |
+
|
79 |
+
from pydantic import BaseModel, Field
|
80 |
+
from langchain_openai import ChatOpenAI
|
81 |
+
from langchain_core.messages import SystemMessage, HumanMessage
|
82 |
+
|
83 |
+
# Define a pydantic model to enforce the output structure
|
84 |
+
class Questions(BaseModel):
|
85 |
+
questions: List[str] = Field(
|
86 |
+
description="A list of sub-questions related to the input query."
|
87 |
+
)
|
88 |
+
|
89 |
+
# Create an instance of the model and enforce the output structure
|
90 |
+
model = ChatOpenAI(model="gpt-4o", temperature=0)
|
91 |
+
structured_model = model.with_structured_output(Questions)
|
92 |
+
|
93 |
+
# Define the system prompt
|
94 |
+
system = """You are a helpful assistant that generates multiple sub-questions related to an input question. \n
|
95 |
+
The goal is to break down the input into a set of sub-problems / sub-questions that can be answered in isolation. \n"""
|
96 |
+
|
97 |
+
# Pass the question to the model
|
98 |
+
question = """What are the main components of an LLM-powered autonomous agent system?"""
|
99 |
+
questions = structured_model.invoke([SystemMessage(content=system)]+[HumanMessage(content=question)])
|
100 |
+
```
|
101 |
+
|
102 |
+
:::tip
|
103 |
+
|
104 |
+
See our RAG from Scratch videos for a few different specific approaches:
|
105 |
+
- [Multi-query](https://youtu.be/JChPi0CRnDY?feature=shared)
|
106 |
+
- [Decomposition](https://youtu.be/h0OPWlEOank?feature=shared)
|
107 |
+
- [Step-back](https://youtu.be/xn1jEjRyJ2U?feature=shared)
|
108 |
+
- [HyDE](https://youtu.be/SaDzIVkYqyY?feature=shared)
|
109 |
+
|
110 |
+
:::
|
111 |
+
|
112 |
+
### Query construction
|
113 |
+
|
114 |
+
Query analysis also can focus on translating natural language queries into specialized query languages or filters.
|
115 |
+
This translation is crucial for effectively interacting with various types of databases that house structured or semi-structured data.
|
116 |
+
|
117 |
+
1. **Structured Data examples**: For relational and graph databases, Domain-Specific Languages (DSLs) are used to query data.
|
118 |
+
- **Text-to-SQL**: [Converts natural language to SQL](https://paperswithcode.com/task/text-to-sql) for relational databases.
|
119 |
+
- **Text-to-Cypher**: [Converts natural language to Cypher](https://neo4j.com/labs/neodash/2.4/user-guide/extensions/natural-language-queries/) for graph databases.
|
120 |
+
|
121 |
+
2. **Semi-structured Data examples**: For vectorstores, queries can combine semantic search with metadata filtering.
|
122 |
+
- **Natural Language to Metadata Filters**: Converts user queries into [appropriate metadata filters](https://docs.pinecone.io/guides/data/filter-with-metadata).
|
123 |
+
|
124 |
+
These approaches leverage models to bridge the gap between user intent and the specific query requirements of different data storage systems. Here are some popular techniques:
|
125 |
+
|
126 |
+
| Name | When to Use | Description |
|
127 |
+
|------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
128 |
+
| [Self Query](/docs/how_to/self_query/) | If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text. | This uses an LLM to transform user input into two things: (1) a string to look up semantically, (2) a metadata filter to go along with it. This is useful because oftentimes questions are about the METADATA of documents (not the content itself). |
|
129 |
+
| [Text to SQL](/docs/tutorials/sql_qa/) | If users are asking questions that require information housed in a relational database, accessible via SQL. | This uses an LLM to transform user input into a SQL query. |
|
130 |
+
| [Text-to-Cypher](/docs/tutorials/graph/) | If users are asking questions that require information housed in a graph database, accessible via Cypher. | This uses an LLM to transform user input into a Cypher query. |
|
131 |
+
|
132 |
+
As an example, here is how to use the `SelfQueryRetriever` to convert natural language queries into metadata filters.
|
133 |
+
|
134 |
+
```python
|
135 |
+
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import ChatOpenAI

# `schema_for_metadata` (a description of the metadata fields) and `vectorstore`
# are assumed to be defined elsewhere in your application.
metadata_field_info = schema_for_metadata
|
136 |
+
document_content_description = "Brief summary of a movie"
|
137 |
+
llm = ChatOpenAI(temperature=0)
|
138 |
+
retriever = SelfQueryRetriever.from_llm(
|
139 |
+
llm,
|
140 |
+
vectorstore,
|
141 |
+
document_content_description,
|
142 |
+
metadata_field_info,
|
143 |
+
)
|
144 |
+
```
|
145 |
+
|
146 |
+
:::info[Further reading]
|
147 |
+
|
148 |
+
* See our tutorials on [text-to-SQL](/docs/tutorials/sql_qa/), [text-to-Cypher](/docs/tutorials/graph/), and [query analysis for metadata filters](/docs/tutorials/rag/#query-analysis).
|
149 |
+
* See our [blog post overview](https://blog.langchain.dev/query-construction/).
|
150 |
+
* See our RAG from Scratch video on [query construction](https://youtu.be/kl6NwWYxvbM?feature=shared).
|
151 |
+
|
152 |
+
:::
|
153 |
+
|
154 |
+
## Information retrieval
|
155 |
+
|
156 |
+
### Common retrieval systems
|
157 |
+
|
158 |
+
#### Lexical search indexes
|
159 |
+
|
160 |
+
Many search engines are based upon matching words in a query to the words in each document.
|
161 |
+
This approach is called lexical retrieval, using search [algorithms that are typically based upon word frequencies](https://cameronrwolfe.substack.com/p/the-basics-of-ai-powered-vector-search?utm_source=profile&utm_medium=reader2).
|
162 |
+
The intuition is simple: if a word appears frequently both in the user’s query and in a particular document, then this document might be a good match.
|
163 |
+
|
164 |
+
The particular data structure used to implement this is often an [*inverted index*](https://www.geeksforgeeks.org/inverted-index/).
|
165 |
+
This type of index contains a list of words and a mapping of each word to a list of locations at which it occurs in various documents.
|
166 |
+
Using this data structure, it is possible to efficiently match the words in search queries to the documents in which they appear.
|
167 |
+
[BM25](https://en.wikipedia.org/wiki/Okapi_BM25#:~:text=BM25%20is%20a%20bag%2Dof,slightly%20different%20components%20and%20parameters.) and [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) are [two popular lexical search algorithms](https://cameronrwolfe.substack.com/p/the-basics-of-ai-powered-vector-search?utm_source=profile&utm_medium=reader2).
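
As a quick sketch, LangChain's BM25 integration (which requires the `rank_bm25` package) builds such an index in memory:

```python
from langchain_community.retrievers import BM25Retriever

retriever = BM25Retriever.from_texts(
    ["The sky is blue.", "Grass is green.", "The ocean is blue."]
)
retriever.invoke("what color is the sky?")  # documents ranked by lexical overlap
```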
|
168 |
+
|
169 |
+
:::info[Further reading]
|
170 |
+
|
171 |
+
* See the [BM25](/docs/integrations/retrievers/bm25/) retriever integration.
|
172 |
+
* See the [Elasticsearch](/docs/integrations/retrievers/elasticsearch_retriever/) retriever integration.
|
173 |
+
|
174 |
+
:::
|
175 |
+
|
176 |
+
#### Vector indexes
|
177 |
+
|
178 |
+
Vector indexes are an alternative way to index and store unstructured data.
|
179 |
+
See our conceptual guide on [vectorstores](/docs/concepts/vectorstores/) for a detailed overview.
|
180 |
+
In short, rather than using word frequencies, vectorstores use an [embedding model](/docs/concepts/embedding_models/) to compress documents into high-dimensional vector representation.
|
181 |
+
This allows for efficient similarity search over embedding vectors using simple mathematical operations like cosine similarity.
|
182 |
+
|
183 |
+
:::info[Further reading]
|
184 |
+
|
185 |
+
* See our [how-to guide](/docs/how_to/vectorstore_retriever/) for more details on working with vectorstores.
|
186 |
+
* See our [list of vectorstore integrations](/docs/integrations/vectorstores/).
|
187 |
+
* See Cameron Wolfe's [blog post](https://cameronrwolfe.substack.com/p/the-basics-of-ai-powered-vector-search?utm_source=profile&utm_medium=reader2) on the basics of vector search.
|
188 |
+
|
189 |
+
:::
|
190 |
+
|
191 |
+
#### Relational databases
|
192 |
+
|
193 |
+
Relational databases are a fundamental type of structured data storage used in many applications.
|
194 |
+
They organize data into tables with predefined schemas, where each table represents an entity or relationship.
|
195 |
+
Data is stored in rows (records) and columns (attributes), allowing for efficient querying and manipulation through SQL (Structured Query Language).
|
196 |
+
Relational databases excel at maintaining data integrity, supporting complex queries, and handling relationships between different data entities.
|
197 |
+
|
198 |
+
:::info[Further reading]
|
199 |
+
|
200 |
+
* See our [tutorial](/docs/tutorials/sql_qa/) for working with SQL databases.
|
201 |
+
* See our [SQL database toolkit](/docs/integrations/tools/sql_database/).
|
202 |
+
|
203 |
+
:::
|
204 |
+
|
205 |
+
#### Graph databases
|
206 |
+
|
207 |
+
Graph databases are a specialized type of database designed to store and manage highly interconnected data.
|
208 |
+
Unlike traditional relational databases, graph databases use a flexible structure consisting of nodes (entities), edges (relationships), and properties.
|
209 |
+
This structure allows for efficient representation and querying of complex, interconnected data.
|
210 |
+
Graph databases store data in a graph structure, with nodes, edges, and properties.
|
211 |
+
They are particularly useful for storing and querying complex relationships between data points, such as social networks, supply-chain management, fraud detection, and recommendation services.
|
212 |
+
|
213 |
+
:::info[Further reading]
|
214 |
+
|
215 |
+
* See our [tutorial](/docs/tutorials/graph/) for working with graph databases.
|
216 |
+
* See our [list of graph database integrations](/docs/integrations/graphs/).
|
217 |
+
* See Neo4j's [starter kit for LangChain](https://neo4j.com/developer-blog/langchain-neo4j-starter-kit/).
|
218 |
+
|
219 |
+
:::
|
220 |
+
|
221 |
+
### Retriever
|
222 |
+
|
223 |
+
LangChain provides a unified interface for interacting with various retrieval systems through the [retriever](/docs/concepts/retrievers/) concept. The interface is straightforward:
|
224 |
+
|
225 |
+
1. Input: A query (string)
|
226 |
+
2. Output: A list of documents (standardized LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects)
|
227 |
+
|
228 |
+
You can create a retriever using any of the retrieval systems mentioned earlier. The query analysis techniques we discussed are particularly useful here, as they enable natural language interfaces for databases that typically require structured query languages.
|
229 |
+
For example, you can build a retriever for a SQL database using text-to-SQL conversion. This allows a natural language query (string) to be transformed into a SQL query behind the scenes.
|
230 |
+
Regardless of the underlying retrieval system, all retrievers in LangChain share a common interface. You can use them with the simple `invoke` method:
|
231 |
+
|
232 |
+
|
233 |
+
```python
|
234 |
+
docs = retriever.invoke(query)
|
235 |
+
```
|
236 |
+
|
237 |
+
:::info[Further reading]
|
238 |
+
|
239 |
+
* See our [conceptual guide on retrievers](/docs/concepts/retrievers/).
|
240 |
+
* See our [how-to guide](/docs/how_to/#retrievers) on working with retrievers.
|
241 |
+
|
242 |
+
:::
|
langchain_md_files/concepts/retrievers.mdx
ADDED
@@ -0,0 +1,145 @@
1 |
+
# Retrievers
|
2 |
+
|
3 |
+
<span data-heading-keywords="retriever,retrievers"></span>
|
4 |
+
|
5 |
+
:::info[Prerequisites]
|
6 |
+
|
7 |
+
* [Vector stores](/docs/concepts/vectorstores/)
|
8 |
+
* [Embeddings](/docs/concepts/embedding_models/)
|
9 |
+
* [Text splitters](/docs/concepts/text_splitters/)
|
10 |
+
|
11 |
+
:::
|
12 |
+
|
13 |
+
## Overview
|
14 |
+
|
15 |
+
Many different types of retrieval systems exist, including vectorstores, graph databases, and relational databases.
|
16 |
+
With the rise in popularity of large language models, retrieval systems have become an important component of AI applications (e.g., [RAG](/docs/concepts/rag/)).
|
17 |
+
Because of their importance and variability, LangChain provides a uniform interface for interacting with different types of retrieval systems.
|
18 |
+
The LangChain [retriever](/docs/concepts/retrievers/) interface is straightforward:
|
19 |
+
|
20 |
+
1. Input: A query (string)
|
21 |
+
2. Output: A list of documents (standardized LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects)
|
22 |
+
|
23 |
+
## Key concept
|
24 |
+
|
25 |
+

|
26 |
+
|
27 |
+
All retrievers implement a simple interface for retrieving documents using natural language queries.
|
28 |
+
|
29 |
+
## Interface
|
30 |
+
|
31 |
+
The only requirement for a retriever is the ability to accept a query and return documents.
|
32 |
+
In particular, [LangChain's retriever class](https://python.langchain.com/api_reference/core/retrievers/langchain_core.retrievers.BaseRetriever.html#) only requires that the `_get_relevant_documents` method is implemented, which takes a `query: str` and returns a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects that are most relevant to the query.
|
33 |
+
The underlying logic used to get relevant documents is specified by the retriever and can be whatever is most useful for the application.
|
34 |
+
|
35 |
+
A LangChain retriever is a [runnable](/docs/how_to/lcel_cheatsheet/), which is a standard interface for LangChain components.
|
36 |
+
This means that it has a few common methods, including `invoke`, that are used to interact with it. A retriever can be invoked with a query:
|
37 |
+
|
38 |
+
```python
|
39 |
+
docs = retriever.invoke(query)
|
40 |
+
```
|
41 |
+
|
42 |
+
Retrievers return a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects, which have two attributes:
|
43 |
+
|
44 |
+
* `page_content`: The content of this document. Currently this is a string.
|
45 |
+
* `metadata`: Arbitrary metadata associated with this document (e.g., document id, file name, source, etc).
|
46 |
+
|
47 |
+
:::info[Further reading]
|
48 |
+
|
49 |
+
* See our [how-to guide](/docs/how_to/custom_retriever/) on building your own custom retriever.
|
50 |
+
|
51 |
+
:::
|
52 |
+
|
53 |
+
## Common types
|
54 |
+
|
55 |
+
Despite the flexibility of the retriever interface, a few common types of retrieval systems are frequently used.
|
56 |
+
|
57 |
+
### Search APIs
|
58 |
+
|
59 |
+
It's important to note that retrievers don't need to actually *store* documents.
|
60 |
+
For example, we can build retrievers on top of search APIs that simply return search results!
|
61 |
+
See our retriever integrations with [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/) or [Wikipedia Search](/docs/integrations/retrievers/wikipedia/).
|
62 |
+
|
63 |
+
### Relational or graph database
|
64 |
+
|
65 |
+
Retrievers can be built on top of relational or graph databases.
|
66 |
+
In these cases, [query analysis](/docs/concepts/retrieval/) techniques that construct a structured query from natural language are critical.
|
67 |
+
For example, you can build a retriever for a SQL database using text-to-SQL conversion. This allows a natural language query (string) to be transformed into a SQL query behind the scenes.
|
68 |
+
|
69 |
+
:::info[Further reading]
|
70 |
+
|
71 |
+
* See our [tutorial](/docs/tutorials/sql_qa/) for context on how to build a retriever using a SQL database and text-to-SQL.
|
72 |
+
* See our [tutorial](/docs/tutorials/graph/) for context on how to build a retriever using a graph database and text-to-Cypher.
|
73 |
+
|
74 |
+
:::
|
75 |
+
|
76 |
+
### Lexical search
|
77 |
+
|
78 |
+
As discussed in our conceptual review of [retrieval](/docs/concepts/retrieval/), many search engines are based upon matching words in a query to the words in each document.
|
79 |
+
[BM25](https://en.wikipedia.org/wiki/Okapi_BM25#:~:text=BM25%20is%20a%20bag%2Dof,slightly%20different%20components%20and%20parameters.) and [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) are [two popular lexical search algorithms](https://cameronrwolfe.substack.com/p/the-basics-of-ai-powered-vector-search?utm_source=profile&utm_medium=reader2).
|
80 |
+
LangChain has retrievers for many popular lexical search algorithms / engines.
|
81 |
+
|
82 |
+
:::info[Further reading]
|
83 |
+
|
84 |
+
* See the [BM25](/docs/integrations/retrievers/bm25/) retriever integration.
|
85 |
+
* See the [TF-IDF](/docs/integrations/retrievers/tf_idf/) retriever integration.
|
86 |
+
* See the [Elasticsearch](/docs/integrations/retrievers/elasticsearch_retriever/) retriever integration.
|
87 |
+
|
88 |
+
:::
|
89 |
+
|
90 |
+
### Vector store
|
91 |
+
|
92 |
+
[Vector stores](/docs/concepts/vectorstores/) are a powerful and efficient way to index and retrieve unstructured data.
|
93 |
+
A vectorstore can be used as a retriever by calling the `as_retriever()` method.
|
94 |
+
|
95 |
+
```python
|
96 |
+
vectorstore = MyVectorStore()
|
97 |
+
retriever = vectorstore.as_retriever()
|
98 |
+
```
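
For a more concrete sketch, here is the same pattern using the in-memory vector store from `langchain-core` and an assumed embedding model:

```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings  # any embedding model could be used here

vectorstore = InMemoryVectorStore.from_texts(
    ["LangChain retrievers return Document objects."],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
docs = retriever.invoke("what do retrievers return?")
```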
|
99 |
+
|
100 |
+
## Advanced retrieval patterns
|
101 |
+
|
102 |
+
### Ensemble
|
103 |
+
|
104 |
+
Because the retriever interface is so simple, returning a list of `Document` objects given a search query, it is possible to combine multiple retrievers using ensembling.
|
105 |
+
This is particularly useful when you have multiple retrievers that are good at finding different types of relevant documents.
|
106 |
+
It is easy to create an [ensemble retriever](/docs/how_to/ensemble_retriever/) that combines multiple retrievers with linear weighted scores:
|
107 |
+
|
108 |
+
```python
|
109 |
+
from langchain.retrievers import EnsembleRetriever

# Initialize the ensemble retriever (bm25_retriever and vector_store_retriever
# are assumed to be existing retrievers defined elsewhere)
|
110 |
+
ensemble_retriever = EnsembleRetriever(
|
111 |
+
retrievers=[bm25_retriever, vector_store_retriever], weights=[0.5, 0.5]
|
112 |
+
)
|
113 |
+
```
|
114 |
+
|
115 |
+
When ensembling, how do we combine search results from many retrievers?
|
116 |
+
This motivates the concept of re-ranking, which takes the output of multiple retrievers and combines them using a more sophisticated algorithm such as [Reciprocal Rank Fusion (RRF)](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf).
|
117 |
+
|
118 |
+
### Source document retention
|
119 |
+
|
120 |
+
Many retrievers utilize some kind of index to make documents easily searchable.
|
121 |
+
The process of indexing can include a transformation step (e.g., vectorstores often use document splitting).
|
122 |
+
Whatever transformation is used, it can be very useful to retain a link between the *transformed document* and the original, giving the retriever the ability to return the *original* document.
|
123 |
+
|
124 |
+

|
125 |
+
|
126 |
+
This is particularly useful in AI applications, because it ensures no loss in document context for the model.
|
127 |
+
For example, you may use a small chunk size for indexing documents in a vectorstore.
|
128 |
+
If you return *only* the chunks as the retrieval result, then the model will have lost the original document context for the chunks.
|
129 |
+
|
130 |
+
LangChain has two different retrievers that can be used to address this challenge.
|
131 |
+
The [Multi-Vector](/docs/how_to/multi_vector/) retriever allows the user to use any document transformation (e.g., use an LLM to write a summary of the document) for indexing while retaining linkage to the source document.
|
132 |
+
The [ParentDocument](/docs/how_to/parent_document_retriever/) retriever links document chunks from a text-splitter transformation for indexing while retaining linkage to the source document.
|
133 |
+
|
134 |
+
| Name | Index Type | Uses an LLM | When to Use | Description |
|
135 |
+
|-----------------------------------------------------------|-------------------------------|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
136 |
+
| [ParentDocument](/docs/how_to/parent_document_retriever/) | Vector store + Document Store | No | If your pages have lots of smaller pieces of distinct information that are best indexed by themselves, but best retrieved all together. | This involves indexing multiple chunks for each document. Then you find the chunks that are most similar in embedding space, but you retrieve the whole parent document and return that (rather than individual chunks). |
|
137 |
+
| [Multi Vector](/docs/how_to/multi_vector/) | Vector store + Document Store | Sometimes during indexing | If you are able to extract information from documents that you think is more relevant to index than the text itself. | This involves creating multiple vectors for each document. Each vector could be created in a myriad of ways - examples include summaries of the text and hypothetical questions. |
|
138 |
+
|
139 |
+
:::info[Further reading]
|
140 |
+
|
141 |
+
* See our [how-to guide](/docs/how_to/parent_document_retriever/) on using the ParentDocument retriever.
|
142 |
+
* See our [how-to guide](/docs/how_to/multi_vector/) on using the MultiVector retriever.
|
143 |
+
* See our RAG from Scratch video on the [multi vector retriever](https://youtu.be/gTCU9I6QqCE?feature=shared).
|
144 |
+
|
145 |
+
:::
|
langchain_md_files/concepts/runnables.mdx
ADDED
@@ -0,0 +1,352 @@
1 |
+
# Runnable interface
|
2 |
+
|
3 |
+
The Runnable interface is the foundation for working with LangChain components, and it's implemented across many of them, such as [language models](/docs/concepts/chat_models), [output parsers](/docs/concepts/output_parsers), [retrievers](/docs/concepts/retrievers), [compiled LangGraph graphs](
|
4 |
+
https://langchain-ai.github.io/langgraph/concepts/low_level/#compiling-your-graph) and more.
|
5 |
+
|
6 |
+
This guide covers the main concepts and methods of the Runnable interface, which allows developers to interact with various LangChain components in a consistent and predictable manner.
|
7 |
+
|
8 |
+
:::info Related Resources
|
9 |
+
* The ["Runnable" Interface API Reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) provides a detailed overview of the Runnable interface and its methods.
|
10 |
+
* A list of built-in `Runnables` can be found in the [LangChain Core API Reference](https://python.langchain.com/api_reference/core/runnables.html). Many of these Runnables are useful when composing custom "chains" in LangChain using the [LangChain Expression Language (LCEL)](/docs/concepts/lcel).
|
11 |
+
:::
|
12 |
+
|
13 |
+
## Overview of runnable interface
|
14 |
+
|
15 |
+
The Runnable interface defines a standard set of methods that allow a Runnable component to be:
|
16 |
+
|
17 |
+
* [Invoked](/docs/how_to/lcel_cheatsheet/#invoke-a-runnable): A single input is transformed into an output.
|
18 |
+
* [Batched](/docs/how_to/lcel_cheatsheet/#batch-a-runnable): Multiple inputs are efficiently transformed into outputs.
|
19 |
+
* [Streamed](/docs/how_to/lcel_cheatsheet/#stream-a-runnable): Outputs are streamed as they are produced.
|
20 |
+
* Inspected: Schematic information about a Runnable's input, output, and configuration can be accessed.
|
21 |
+
* Composed: Multiple Runnables can be composed to work together using [the LangChain Expression Language (LCEL)](/docs/concepts/lcel) to create complex pipelines.
|
22 |
+
|
23 |
+
Please review the [LCEL Cheatsheet](/docs/how_to/lcel_cheatsheet) for some common patterns that involve the Runnable interface and LCEL expressions.
|
24 |
+
|
25 |
+
<a id="batch"></a>
|
26 |
+
### Optimized parallel execution (batch)
|
27 |
+
<span data-heading-keywords="batch"></span>
|
28 |
+
|
29 |
+
LangChain Runnables offer built-in `batch` (and `batch_as_completed`) APIs that allow you to process multiple inputs in parallel.
|
30 |
+
|
31 |
+
Using these methods can significantly improve performance when you need to process multiple independent inputs, as the
|
32 |
+
processing can be done in parallel instead of sequentially.
|
33 |
+
|
34 |
+
The two batching options are:
|
35 |
+
|
36 |
+
* `batch`: Process multiple inputs in parallel, returning results in the same order as the inputs.
|
37 |
+
* `batch_as_completed`: Process multiple inputs in parallel, returning results as they complete. Results may arrive out of order, but each includes the input index for matching.
|
38 |
+
|
39 |
+
The default implementations of `batch` and `batch_as_completed` use a thread pool executor to run the `invoke` method in parallel. This allows for efficient parallel execution without the need for users to manage threads, and speeds up code that is I/O-bound (e.g., making API requests, reading files, etc.). It will not be as effective for CPU-bound operations, as Python's GIL (Global Interpreter Lock) prevents true parallel execution.
|
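As a minimal sketch (using a toy `RunnableLambda` as a stand-in for a slower, I/O-bound component), the two batching methods can be used like this:

```python
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x * 2)

# batch returns outputs in the same order as the inputs.
print(runnable.batch([1, 2, 3]))  # -> [2, 4, 6]

# batch_as_completed yields (index, output) pairs as each call finishes,
# possibly out of order; the index lets you match results to inputs.
for index, output in runnable.batch_as_completed([1, 2, 3]):
    print(index, output)
```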
40 |
+
|
41 |
+
Some Runnables may provide their own implementations of `batch` and `batch_as_completed` that are optimized for their specific use case (e.g.,
|
42 |
+
rely on a `batch` API provided by a model provider).
|
43 |
+
|
44 |
+
:::note
|
45 |
+
The async versions, `abatch` and `abatch_as_completed`, rely on asyncio's [gather](https://docs.python.org/3/library/asyncio-task.html#asyncio.gather) and [as_completed](https://docs.python.org/3/library/asyncio-task.html#asyncio.as_completed) functions to run the `ainvoke` method in parallel.
|
46 |
+
:::
|
47 |
+
|
48 |
+
:::tip
|
49 |
+
When processing a large number of inputs using `batch` or `batch_as_completed`, users may want to control the maximum number of parallel calls. This can be done by setting the `max_concurrency` attribute in the `RunnableConfig` dictionary. See the [RunnableConfig](/docs/concepts/runnables/#runnableconfig) for more information.
|
50 |
+
|
51 |
+
Chat Models also have a built-in [rate limiter](/docs/concepts/chat_models#rate-limiting) that can be used to control the rate at which requests are made.
|
52 |
+
:::
|
53 |
+
|
54 |
+
### Asynchronous support
|
55 |
+
<span data-heading-keywords="async-api"></span>
|
56 |
+
|
57 |
+
Runnables expose an asynchronous API, allowing them to be called using the `await` syntax in Python. Asynchronous methods can be identified by the "a" prefix (e.g., `ainvoke`, `abatch`, `astream`, `abatch_as_completed`).
|
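For example, a minimal sketch of calling a Runnable asynchronously (again using a toy `RunnableLambda`):

```python
import asyncio

from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1)

async def main() -> None:
    # ainvoke mirrors invoke, but is awaitable and non-blocking.
    result = await runnable.ainvoke(1)
    print(result)  # -> 2

asyncio.run(main())
```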
58 |
+
|
59 |
+
Please refer to the [Async Programming with LangChain](/docs/concepts/async) guide for more details.
|
60 |
+
|
61 |
+
## Streaming APIs
|
62 |
+
<span data-heading-keywords="streaming-api"></span>
|
63 |
+
|
64 |
+
Streaming is critical in making applications based on LLMs feel responsive to end-users.
|
65 |
+
|
66 |
+
Runnables expose the following three streaming APIs:
|
67 |
+
|
68 |
+
1. sync [stream](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.stream) and async [astream](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.astream): yield the output of a Runnable as it is generated.
|
69 |
+
2. The async `astream_events`: a more advanced streaming API that allows streaming intermediate steps and final output
|
70 |
+
3. The **legacy** async `astream_log`: a legacy streaming API that streams intermediate steps and final output
|
71 |
+
|
72 |
+
Please refer to the [Streaming Conceptual Guide](/docs/concepts/streaming) for more details on how to stream in LangChain.
|
73 |
+
|
74 |
+
## Input and output types
|
75 |
+
|
76 |
+
Every `Runnable` is characterized by an input and output type. These input and output types can be any Python object, and are defined by the Runnable itself.
|
77 |
+
|
78 |
+
Runnable methods that result in the execution of the Runnable (e.g., `invoke`, `batch`, `stream`, `astream_events`) work with these input and output types.
|
79 |
+
|
80 |
+
* invoke: Accepts an input and returns an output.
|
81 |
+
* batch: Accepts a list of inputs and returns a list of outputs.
|
82 |
+
* stream: Accepts an input and returns a generator that yields outputs.
|
83 |
+
|
84 |
+
The **input type** and **output type** vary by component:
|
85 |
+
|
86 |
+
| Component | Input Type | Output Type |
|
87 |
+
|--------------|--------------------------------------------------|-----------------------|
|
88 |
+
| Prompt | dictionary | PromptValue |
|
89 |
+
| ChatModel | a string, list of chat messages or a PromptValue | ChatMessage |
|
90 |
+
| LLM | a string, list of chat messages or a PromptValue | String |
|
91 |
+
| OutputParser | the output of an LLM or ChatModel | Depends on the parser |
|
92 |
+
| Retriever | a string | List of Documents |
|
93 |
+
| Tool | a string or dictionary, depending on the tool | Depends on the tool |
|
94 |
+
|
95 |
+
Please refer to the individual component documentation for more information on the input and output types and how to use them.
|
96 |
+
|
97 |
+
### Inspecting schemas
|
98 |
+
|
99 |
+
:::note
|
100 |
+
This is an advanced feature that is unnecessary for most users. You should probably
|
101 |
+
skip this section unless you have a specific need to inspect the schema of a Runnable.
|
102 |
+
:::
|
103 |
+
|
104 |
+
In more advanced use cases, you may want to programmatically **inspect** the Runnable and determine what input and output types the Runnable expects and produces.
|
105 |
+
|
106 |
+
The Runnable interface provides methods to get the [JSON Schema](https://json-schema.org/) of the input and output types of a Runnable, as well as [Pydantic schemas](https://docs.pydantic.dev/latest/) for the input and output types.
|
107 |
+
|
108 |
+
These APIs are mostly used internally for unit-testing and by [LangServe](/docs/concepts/architecture#langserve) which uses the APIs for input validation and generation of [OpenAPI documentation](https://www.openapis.org/).
|
109 |
+
|
110 |
+
In addition to the input and output types, some Runnables have been set up with additional runtime configuration options.
|
111 |
+
There are corresponding APIs to get the Pydantic Schema and JSON Schema of the configuration options for the Runnable.
|
112 |
+
Please see the [Configurable Runnables](#configurable-runnables) section for more information.
|
113 |
+
|
114 |
+
| Method | Description |
|
115 |
+
|-------------------------|------------------------------------------------------------------|
|
116 |
+
| `get_input_schema` | Gives the Pydantic Schema of the input schema for the Runnable. |
|
117 |
+
| `get_output_schema` | Gives the Pydantic Schema of the output schema for the Runnable. |
|
118 |
+
| `config_schema` | Gives the Pydantic Schema of the config schema for the Runnable. |
|
119 |
+
| `get_input_jsonschema` | Gives the JSONSchema of the input schema for the Runnable. |
|
120 |
+
| `get_output_jsonschema` | Gives the JSONSchema of the output schema for the Runnable. |
|
121 |
+
| `get_config_jsonschema` | Gives the JSONSchema of the config schema for the Runnable. |
|
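As a minimal sketch, inspecting the input schema of a prompt template might look like this (the exact shape of the returned schema depends on the component and the LangChain version):

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")

# The prompt expects a dictionary with a "topic" key; the JSON Schema
# returned below reflects that.
print(prompt.get_input_jsonschema())
```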
122 |
+
|
123 |
+
|
124 |
+
#### With_types
|
125 |
+
|
126 |
+
LangChain will automatically try to infer the input and output types of a Runnable based on available information.
|
127 |
+
|
128 |
+
Currently, this inference does not work well for more complex Runnables that are built using [LCEL](/docs/concepts/lcel) composition, and the inferred input and / or output types may be incorrect. In these cases, we recommend that users override the inferred input and output types using the `with_types` method ([API Reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types
|
129 |
+
)).
|
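A minimal sketch of overriding the inferred input type on a composed chain; here `chain` is a placeholder for your own LCEL chain and `ChainInput` is an illustrative Pydantic model:

```python
from pydantic import BaseModel

class ChainInput(BaseModel):
    topic: str

# Override the (possibly incorrect) inferred input type of an LCEL chain.
chain = chain.with_types(input_type=ChainInput)
```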
130 |
+
|
131 |
+
## RunnableConfig
|
132 |
+
|
133 |
+
Any of the methods that are used to execute the runnable (e.g., `invoke`, `batch`, `stream`, `astream_events`) accept a second argument called
|
134 |
+
`RunnableConfig` ([API Reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.config.RunnableConfig.html#RunnableConfig)). This argument is a dictionary that contains configuration for the Runnable that will be used
|
135 |
+
at run time during the execution of the runnable.
|
136 |
+
|
137 |
+
A `RunnableConfig` can have any of the following properties defined:
|
138 |
+
|
139 |
+
| Attribute | Description |
|
140 |
+
|-----------------|--------------------------------------------------------------------------------------------|
|
141 |
+
| run_name | Name used for the given Runnable (not inherited). |
|
142 |
+
| run_id          | Unique identifier for this call. Sub-calls will get their own unique run ids.               |
|
143 |
+
| tags | Tags for this call and any sub-calls. |
|
144 |
+
| metadata | Metadata for this call and any sub-calls. |
|
145 |
+
| callbacks | Callbacks for this call and any sub-calls. |
|
146 |
+
| max_concurrency | Maximum number of parallel calls to make (e.g., used by batch). |
|
147 |
+
| recursion_limit | Maximum number of times a call can recurse (e.g., used by Runnables that return Runnables) |
|
148 |
+
| configurable | Runtime values for configurable attributes of the Runnable. |
|
149 |
+
|
150 |
+
Passing `config` to the `invoke` method is done like so:
|
151 |
+
|
152 |
+
```python
|
153 |
+
some_runnable.invoke(
|
154 |
+
some_input,
|
155 |
+
config={
|
156 |
+
'run_name': 'my_run',
|
157 |
+
'tags': ['tag1', 'tag2'],
|
158 |
+
'metadata': {'key': 'value'}
|
159 |
+
|
160 |
+
}
|
161 |
+
)
|
162 |
+
```
|
163 |
+
|
164 |
+
### Propagation of RunnableConfig
|
165 |
+
|
166 |
+
Many `Runnables` are composed of other Runnables, and it is important that the `RunnableConfig` is propagated to all sub-calls made by the Runnable. This makes it possible to provide runtime configuration values to the parent Runnable that are inherited by all sub-calls.
|
167 |
+
|
168 |
+
If this were not the case, it would be impossible to set and propagate [callbacks](/docs/concepts/callbacks) or other configuration values like `tags` and `metadata` which
|
169 |
+
are expected to be inherited by all sub-calls.
|
170 |
+
|
171 |
+
There are two main patterns by which new `Runnables` are created:
|
172 |
+
|
173 |
+
1. Declaratively using [LangChain Expression Language (LCEL)](/docs/concepts/lcel):
|
174 |
+
|
175 |
+
```python
|
176 |
+
chain = prompt | chat_model | output_parser
|
177 |
+
```
|
178 |
+
|
179 |
+
2. Using a [custom Runnable](#custom-runnables) (e.g., `RunnableLambda`) or using the `@tool` decorator:
|
180 |
+
|
181 |
+
```python
|
182 |
+
def foo(input):
|
183 |
+
# Note that .invoke() is used directly here
|
184 |
+
return bar_runnable.invoke(input)
|
185 |
+
foo_runnable = RunnableLambda(foo)
|
186 |
+
```
|
187 |
+
|
188 |
+
LangChain will try to propagate `RunnableConfig` automatically for both of the patterns.
|
189 |
+
|
190 |
+
For handling the second pattern, LangChain relies on Python's [contextvars](https://docs.python.org/3/library/contextvars.html).
|
191 |
+
|
192 |
+
In Python 3.11 and above, this works out of the box, and you do not need to do anything special to propagate the `RunnableConfig` to the sub-calls.
|
193 |
+
|
194 |
+
In Python 3.9 and 3.10, if you are using **async code**, you need to manually pass the `RunnableConfig` through to the `Runnable` when invoking it.
|
195 |
+
|
196 |
+
This is due to a limitation in [asyncio's tasks](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task) in Python 3.9 and 3.10 which did
|
197 |
+
not accept a `context` argument.
|
198 |
+
|
199 |
+
Propagating the `RunnableConfig` manually is done like so:
|
200 |
+
|
201 |
+
```python
|
202 |
+
async def foo(input, config): # <-- Note the config argument
|
203 |
+
return await bar_runnable.ainvoke(input, config=config)
|
204 |
+
|
205 |
+
foo_runnable = RunnableLambda(foo)
|
206 |
+
```
|
207 |
+
|
208 |
+
:::caution
|
209 |
+
When using Python 3.10 or lower and writing async code, `RunnableConfig` cannot be propagated
|
210 |
+
automatically, and you will need to do it manually! This is a common pitfall when
|
211 |
+
attempting to stream data using `astream_events` and `astream_log` as these methods
|
212 |
+
rely on proper propagation of [callbacks](/docs/concepts/callbacks) defined inside of `RunnableConfig`.
|
213 |
+
:::
|
214 |
+
|
215 |
+
### Setting custom run name, tags, and metadata
|
216 |
+
|
217 |
+
The `run_name`, `tags`, and `metadata` attributes of the `RunnableConfig` dictionary can be used to set custom values for the run name, tags, and metadata for a given Runnable.
|
218 |
+
|
219 |
+
The `run_name` is a string that can be used to set a custom name for the run. This name will be used in logs and other places to identify the run. It is not inherited by sub-calls.
|
220 |
+
|
221 |
+
The `tags` and `metadata` attributes are lists and dictionaries, respectively, that can be used to set custom tags and metadata for the run. These values are inherited by sub-calls.
|
222 |
+
|
223 |
+
Using these attributes can be useful for tracking and debugging runs, as they will be surfaced in [LangSmith](https://docs.smith.langchain.com/) as trace attributes that you can
|
224 |
+
filter and search on.
|
225 |
+
|
226 |
+
The attributes will also be propagated to [callbacks](/docs/concepts/callbacks), and will appear in streaming APIs like [astream_events](/docs/concepts/streaming) as part of each event in the stream.
|
227 |
+
|
228 |
+
:::note Related
|
229 |
+
* [How-to trace with LangChain](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain)
|
230 |
+
:::
|
231 |
+
|
232 |
+
### Setting run id
|
233 |
+
|
234 |
+
:::note
|
235 |
+
This is an advanced feature that is unnecessary for most users.
|
236 |
+
:::
|
237 |
+
|
238 |
+
You may need to set a custom `run_id` for a given run, in case you want
|
239 |
+
to reference it later or correlate it with other systems.
|
240 |
+
|
241 |
+
The `run_id` MUST be a valid UUID string and **unique** for each run. It is used to identify
|
242 |
+
the parent run; sub-calls will get their own unique run ids automatically.
|
243 |
+
|
244 |
+
To set a custom `run_id`, you can pass it as a key-value pair in the `config` dictionary when invoking the Runnable:
|
245 |
+
|
246 |
+
```python
|
247 |
+
import uuid
|
248 |
+
|
249 |
+
run_id = uuid.uuid4()
|
250 |
+
|
251 |
+
some_runnable.invoke(
|
252 |
+
some_input,
|
253 |
+
config={
|
254 |
+
'run_id': run_id
|
255 |
+
}
|
256 |
+
)
|
257 |
+
|
258 |
+
# Do something with the run_id
|
259 |
+
```
|
260 |
+
|
261 |
+
### Setting recursion limit
|
262 |
+
|
263 |
+
:::note
|
264 |
+
This is an advanced feature that is unnecessary for most users.
|
265 |
+
:::
|
266 |
+
|
267 |
+
Some Runnables may return other Runnables, which can lead to infinite recursion if not handled properly. To prevent this, you can set a `recursion_limit` in the `RunnableConfig` dictionary. This will limit the number of times a Runnable can recurse.
|
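Following the same `some_runnable` / `some_input` placeholders used above, a sketch of setting the limit:

```python
some_runnable.invoke(
    some_input,
    config={
        # Stop after at most 10 levels of Runnable-returning-Runnable recursion.
        "recursion_limit": 10
    }
)
```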
268 |
+
|
269 |
+
### Setting max concurrency
|
270 |
+
|
271 |
+
If using the `batch` or `batch_as_completed` methods, you can set the `max_concurrency` attribute in the `RunnableConfig` dictionary to control the maximum number of parallel calls to make. This can be useful when you want to limit the number of parallel calls to prevent overloading a server or API.
|
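For example (again with placeholder names), a sketch of capping parallelism during a batch call:

```python
some_runnable.batch(
    list_of_inputs,
    config={
        # Process at most 5 inputs at any one time.
        "max_concurrency": 5
    }
)
```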
272 |
+
|
273 |
+
|
274 |
+
:::tip
|
275 |
+
If you're trying to rate limit the number of requests made by a **Chat Model**, using the built-in [rate limiter](/docs/concepts/chat_models#rate-limiting) will be more effective than setting `max_concurrency`.
|
276 |
+
|
277 |
+
See the [How to handle rate limits](/docs/how_to/chat_model_rate_limiting/) guide for more information.
|
278 |
+
:::
|
279 |
+
|
280 |
+
### Setting configurable
|
281 |
+
|
282 |
+
The `configurable` field is used to pass runtime values for configurable attributes of the Runnable.
|
283 |
+
|
284 |
+
It is used frequently in [LangGraph](/docs/concepts/architecture#langgraph) with
|
285 |
+
[LangGraph Persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/)
|
286 |
+
and [memory](https://langchain-ai.github.io/langgraph/concepts/memory/).
|
287 |
+
|
288 |
+
It is used for a similar purpose in [RunnableWithMessageHistory](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html#langchain_core.runnables.history.RunnableWithMessageHistory) to specify either
|
289 |
+
a `session_id` or a `conversation_id` to keep track of conversation history.
|
290 |
+
|
291 |
+
In addition, you can use it to specify any custom configuration options to pass to any [Configurable Runnable](#configurable-runnables) that you create.
|
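As a sketch, passing a session identifier through the `configurable` field might look like the following. `"session_id"` is the default key expected by `RunnableWithMessageHistory`, but the exact keys depend on how the Runnable (or LangGraph graph) was configured.

```python
runnable_with_history.invoke(
    some_input,
    config={
        "configurable": {
            # Looked up by RunnableWithMessageHistory (or a LangGraph
            # checkpointer) to retrieve prior conversation state.
            "session_id": "user-123-conversation-1"
        }
    }
)
```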
292 |
+
|
293 |
+
### Setting callbacks
|
294 |
+
|
295 |
+
Use this option to configure [callbacks](/docs/concepts/callbacks) for the runnable at
|
296 |
+
runtime. The callbacks will be passed to all sub-calls made by the runnable.
|
297 |
+
|
298 |
+
```python
|
299 |
+
some_runnable.invoke(
|
300 |
+
some_input,
|
301 |
+
{
|
302 |
+
"callbacks": [
|
303 |
+
SomeCallbackHandler(),
|
304 |
+
AnotherCallbackHandler(),
|
305 |
+
]
|
306 |
+
}
|
307 |
+
)
|
308 |
+
```
|
309 |
+
|
310 |
+
Please read the [Callbacks Conceptual Guide](/docs/concepts/callbacks) for more information on how to use callbacks in LangChain.
|
311 |
+
|
312 |
+
:::important
|
313 |
+
If you're using Python 3.9 or 3.10 in an async environment, you must propagate
|
314 |
+
the `RunnableConfig` manually to sub-calls in some cases. Please see the
|
315 |
+
[Propagating RunnableConfig](#propagation-of-runnableconfig) section for more information.
|
316 |
+
:::
|
317 |
+
|
318 |
+
## Creating a runnable from a function {#custom-runnables}
|
319 |
+
|
320 |
+
You may need to create a custom Runnable that runs arbitrary logic. This is especially
|
321 |
+
useful if using [LangChain Expression Language (LCEL)](/docs/concepts/lcel) to compose
|
322 |
+
multiple Runnables and you need to add custom processing logic in one of the steps.
|
323 |
+
|
324 |
+
There are two ways to create a custom Runnable from a function:
|
325 |
+
|
326 |
+
* `RunnableLambda`: Use this for simple transformations where streaming is not required.
|
327 |
+
* `RunnableGenerator`: use this for more complex transformations when streaming is needed.
|
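A minimal sketch of both options:

```python
from langchain_core.runnables import RunnableGenerator, RunnableLambda

# RunnableLambda: a simple input -> output transformation (no streaming).
uppercase = RunnableLambda(lambda text: text.upper())
print(uppercase.invoke("hello"))  # -> "HELLO"

# RunnableGenerator: transforms a stream of input chunks into a stream of
# output chunks, so it can take part in streaming pipelines.
def reverse_chunks(chunks):
    for chunk in chunks:
        yield chunk[::-1]

reverser = RunnableGenerator(reverse_chunks)
print(list(reverser.stream("abc")))  # -> ["cba"]
```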
328 |
+
|
329 |
+
See the [How to run custom functions](/docs/how_to/functions) guide for more information on how to use `RunnableLambda` and `RunnableGenerator`.
|
330 |
+
|
331 |
+
:::important
|
332 |
+
Users should not try to subclass Runnables to create a new custom Runnable. It is
|
333 |
+
much more complex and error-prone than simply using `RunnableLambda` or `RunnableGenerator`.
|
334 |
+
:::
|
335 |
+
|
336 |
+
## Configurable runnables
|
337 |
+
|
338 |
+
:::note
|
339 |
+
This is an advanced feature that is unnecessary for most users.
|
340 |
+
|
341 |
+
It helps with configuration of large "chains" created using the [LangChain Expression Language (LCEL)](/docs/concepts/lcel)
|
342 |
+
and is leveraged by [LangServe](/docs/concepts/architecture#langserve) for deployed Runnables.
|
343 |
+
:::
|
344 |
+
|
345 |
+
Sometimes you may want to experiment with, or even expose to the end user, multiple different ways of doing things with your Runnable. This could involve adjusting parameters like the temperature in a chat model or even switching between different chat models.
|
346 |
+
|
347 |
+
To simplify this process, the Runnable interface provides two methods for creating configurable Runnables at runtime:
|
348 |
+
|
349 |
+
* `configurable_fields`: This method allows you to configure specific **attributes** in a Runnable. For example, the `temperature` attribute of a chat model.
|
350 |
+
* `configurable_alternatives`: This method enables you to specify **alternative** Runnables that can be run during runtime. For example, you could specify a list of different chat models that can be used.
|
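A minimal sketch of `configurable_fields`, closely following the how-to guide linked below (it assumes `langchain-openai` is installed and an OpenAI API key is set):

```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

# Expose `temperature` as a runtime-configurable attribute.
model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="The temperature of the LLM",
    )
)

# Override the value at runtime via the `configurable` section of the config.
model.invoke("Pick a random number", config={"configurable": {"llm_temperature": 0.9}})
```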
351 |
+
|
352 |
+
See the [How to configure runtime chain internals](/docs/how_to/configure) guide for more information on how to configure runtime chain internals.
|
langchain_md_files/concepts/streaming.mdx
ADDED
@@ -0,0 +1,191 @@
1 |
+
# Streaming
|
2 |
+
|
3 |
+
:::info Prerequisites
|
4 |
+
* [Runnable Interface](/docs/concepts/runnables)
|
5 |
+
* [Chat Models](/docs/concepts/chat_models)
|
6 |
+
:::
|
7 |
+
|
8 |
+
**Streaming** is crucial for enhancing the responsiveness of applications built on [LLMs](/docs/concepts/chat_models). By displaying output progressively, even before a complete response is ready, streaming significantly improves user experience (UX), particularly when dealing with the latency of LLMs.
|
9 |
+
|
10 |
+
## Overview
|
11 |
+
|
12 |
+
Generating full responses from [LLMs](/docs/concepts/chat_models) often incurs a delay of several seconds, which becomes more noticeable in complex applications with multiple model calls. Fortunately, LLMs generate responses iteratively, allowing for intermediate results to be displayed as they are produced. By streaming these intermediate outputs, LangChain enables smoother UX in LLM-powered apps and offers built-in support for streaming at the core of its design.
|
13 |
+
|
14 |
+
In this guide, we'll discuss streaming in LLM applications and explore how LangChain's streaming APIs facilitate real-time output from various components in your application.
|
15 |
+
|
16 |
+
## What to stream in LLM applications
|
17 |
+
|
18 |
+
In applications involving LLMs, several types of data can be streamed to improve user experience by reducing perceived latency and increasing transparency. These include:
|
19 |
+
|
20 |
+
### 1. Streaming LLM outputs
|
21 |
+
|
22 |
+
The most common and critical data to stream is the output generated by the LLM itself. LLMs often take time to generate full responses, and by streaming the output in real-time, users can see partial results as they are produced. This provides immediate feedback and helps reduce the wait time for users.
|
23 |
+
|
24 |
+
### 2. Streaming pipeline or workflow progress
|
25 |
+
|
26 |
+
Beyond just streaming LLM output, it’s useful to stream progress through more complex workflows or pipelines, giving users a sense of how the application is progressing overall. This could include:
|
27 |
+
|
28 |
+
- **In LangGraph Workflows:**
|
29 |
+
With [LangGraph](/docs/concepts/architecture#langgraph), workflows are composed of nodes and edges that represent various steps. Streaming here involves tracking changes to the **graph state** as individual **nodes** request updates. This allows for more granular monitoring of which node in the workflow is currently active, giving real-time updates about the status of the workflow as it progresses through different stages.
|
30 |
+
|
31 |
+
- **In LCEL Pipelines:**
|
32 |
+
Streaming updates from an [LCEL](/docs/concepts/lcel) pipeline involves capturing progress from individual **sub-runnables**. For example, as different steps or components of the pipeline execute, you can stream which sub-runnable is currently running, providing real-time insight into the overall pipeline's progress.
|
33 |
+
|
34 |
+
Streaming pipeline or workflow progress is essential in providing users with a clear picture of where the application is in the execution process.
|
35 |
+
|
36 |
+
### 3. Streaming custom data
|
37 |
+
|
38 |
+
In some cases, you may need to stream **custom data** that goes beyond the information provided by the pipeline or workflow structure. This custom information is injected within a specific step in the workflow, whether that step is a tool or a LangGraph node. For example, you could stream updates about what a tool is doing in real-time or the progress through a LangGraph node. This granular data, which is emitted directly from within the step, provides more detailed insights into the execution of the workflow and is especially useful in complex processes where more visibility is needed.
|
39 |
+
|
40 |
+
## Streaming APIs
|
41 |
+
|
42 |
+
LangChain has two main APIs for streaming output in real-time. These APIs are supported by any component that implements the [Runnable Interface](/docs/concepts/runnables), including [LLMs](/docs/concepts/chat_models), [compiled LangGraph graphs](https://langchain-ai.github.io/langgraph/concepts/low_level/), and any Runnable generated with [LCEL](/docs/concepts/lcel).
|
43 |
+
|
44 |
+
1. sync [stream](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.stream) and async [astream](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.astream): Use to stream outputs from individual Runnables (e.g., a chat model) as they are generated or stream any workflow created with LangGraph.
|
45 |
+
2. The async only [astream_events](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.astream_events): Use this API to get access to custom events and intermediate outputs from LLM applications built entirely with [LCEL](/docs/concepts/lcel). Note that this API is available, but not needed when working with LangGraph.
|
46 |
+
|
47 |
+
:::note
|
48 |
+
In addition, there is a **legacy** async [astream_log](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.astream_log) API. This API is not recommended for new projects, as it is more complex and less feature-rich than the other streaming APIs.
|
49 |
+
:::
|
50 |
+
|
51 |
+
### `stream()` and `astream()`
|
52 |
+
|
53 |
+
The `stream()` method returns an iterator that yields chunks of output synchronously as they are produced. You can use a `for` loop to process each chunk in real-time. For example, when using an LLM, this allows the output to be streamed incrementally as it is generated, reducing the wait time for users.
|
54 |
+
|
55 |
+
The type of chunk yielded by the `stream()` and `astream()` methods depends on the component being streamed. For example, when streaming from an [LLM](/docs/concepts/chat_models), each chunk will be an [AIMessageChunk](/docs/concepts/messages#aimessagechunk); however, for other components, the chunk may be different.
|
56 |
+
|
57 |
+
The `stream()` method returns an iterator that yields these chunks as they are produced. For example,
|
58 |
+
|
59 |
+
```python
|
60 |
+
for chunk in component.stream(some_input):
|
61 |
+
# IMPORTANT: Keep the processing of each chunk as efficient as possible.
|
62 |
+
# While you're processing the current chunk, the upstream component is
|
63 |
+
# waiting to produce the next one. For example, if working with LangGraph,
|
64 |
+
# graph execution is paused while the current chunk is being processed.
|
65 |
+
# In extreme cases, this could even result in timeouts (e.g., when llm outputs are
|
66 |
+
# streamed from an API that has a timeout).
|
67 |
+
print(chunk)
|
68 |
+
```
|
69 |
+
|
70 |
+
The [asynchronous version](/docs/concepts/async), `astream()`, works similarly but is designed for non-blocking workflows. You can use it in asynchronous code to achieve the same real-time streaming behavior.
|
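For example, mirroring the sketch above:

```python
async for chunk in component.astream(some_input):
    # Same caveat as above: keep per-chunk processing fast so the upstream
    # component is not blocked while producing the next chunk.
    print(chunk)
```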
71 |
+
|
72 |
+
#### Usage with chat models
|
73 |
+
|
74 |
+
When using `stream()` or `astream()` with chat models, the output is streamed as [AIMessageChunks](/docs/concepts/messages#aimessagechunk) as it is generated by the LLM. This allows you to present or process the LLM's output incrementally as it's being produced, which is particularly useful in interactive applications or interfaces.
|
75 |
+
|
76 |
+
#### Usage with LangGraph
|
77 |
+
|
78 |
+
[LangGraph](/docs/concepts/architecture#langgraph) compiled graphs are [Runnables](/docs/concepts/runnables) and support the standard streaming APIs.
|
79 |
+
|
80 |
+
When using the *stream* and *astream* methods with LangGraph, you can choose **one or more** [streaming mode](https://langchain-ai.github.io/langgraph/reference/types/#langgraph.types.StreamMode) which allow you to control the type of output that is streamed. The available streaming modes are:
|
81 |
+
|
82 |
+
- **"values"**: Emit all values of the [state](https://langchain-ai.github.io/langgraph/concepts/low_level/) for each step.
|
83 |
+
- **"updates"**: Emit only the node name(s) and updates that were returned by the node(s) after each step.
|
84 |
+
- **"debug"**: Emit debug events for each step.
|
85 |
+
- **"messages"**: Emit LLM [messages](/docs/concepts/messages) [token-by-token](/docs/concepts/tokens).
|
86 |
+
- **"custom"**: Emit custom output written using [LangGraph's StreamWriter](https://langchain-ai.github.io/langgraph/reference/types/#langgraph.types.StreamWriter).
|
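As a rough sketch, selecting a streaming mode looks like this. It assumes `graph` is a compiled LangGraph graph whose state holds a `messages` list; see the LangGraph references below for the exact parameters and input shapes.

```python
# Stream only the per-node state updates produced at each step.
for chunk in graph.stream({"messages": [("user", "hi")]}, stream_mode="updates"):
    print(chunk)
```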
87 |
+
|
88 |
+
For more information, please see:
|
89 |
+
* [LangGraph streaming conceptual guide](https://langchain-ai.github.io/langgraph/concepts/streaming/) for more information on how to stream when working with LangGraph.
|
90 |
+
* [LangGraph streaming how-to guides](https://langchain-ai.github.io/langgraph/how-tos/#streaming) for specific examples of streaming in LangGraph.
|
91 |
+
|
92 |
+
#### Usage with LCEL
|
93 |
+
|
94 |
+
If you compose multiple Runnables using [LangChain’s Expression Language (LCEL)](/docs/concepts/lcel), the `stream()` and `astream()` methods will, by convention, stream the output of the last step in the chain. This allows the final processed result to be streamed incrementally. **LCEL** tries to optimize streaming latency in pipelines so that the streaming results from the last step are available as soon as possible.
|
95 |
+
|
96 |
+
|
97 |
+
|
98 |
+
### `astream_events`
|
99 |
+
<span data-heading-keywords="astream_events,stream_events,stream events"></span>
|
100 |
+
|
101 |
+
:::tip
|
102 |
+
Use the `astream_events` API to access custom data and intermediate outputs from LLM applications built entirely with [LCEL](/docs/concepts/lcel).
|
103 |
+
|
104 |
+
While this API is available for use with [LangGraph](/docs/concepts/architecture#langgraph) as well, it is usually not necessary when working with LangGraph, as the `stream` and `astream` methods provide comprehensive streaming capabilities for LangGraph graphs.
|
105 |
+
:::
|
106 |
+
|
107 |
+
For chains constructed using **LCEL**, the `.stream()` method only streams the output of the final step from the chain. This might be sufficient for some applications, but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of the chain alongside the final output. For example, you may want to return sources alongside the final generation when building a chat-over-documents app.
|
108 |
+
|
109 |
+
There are ways to do this [using callbacks](/docs/concepts/callbacks), or by constructing your chain in such a way that it passes intermediate
|
110 |
+
values to the end with something like chained [`.assign()`](/docs/how_to/passthrough/) calls, but LangChain also includes an
|
111 |
+
`.astream_events()` method that combines the flexibility of callbacks with the ergonomics of `.stream()`. When called, it returns an iterator
|
112 |
+
which yields [various types of events](/docs/how_to/streaming/#event-reference) that you can filter and process according
|
113 |
+
to the needs of your project.
|
114 |
+
|
115 |
+
Here's one small example that prints just events containing streamed chat model output:
|
116 |
+
|
117 |
+
```python
|
118 |
+
from langchain_core.output_parsers import StrOutputParser
|
119 |
+
from langchain_core.prompts import ChatPromptTemplate
|
120 |
+
from langchain_anthropic import ChatAnthropic
|
121 |
+
|
122 |
+
model = ChatAnthropic(model="claude-3-sonnet-20240229")
|
123 |
+
|
124 |
+
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
|
125 |
+
parser = StrOutputParser()
|
126 |
+
chain = prompt | model | parser
|
127 |
+
|
128 |
+
async for event in chain.astream_events({"topic": "parrot"}):
|
129 |
+
kind = event["event"]
|
130 |
+
if kind == "on_chat_model_stream":
|
131 |
+
print(event, end="|", flush=True)
|
132 |
+
```
|
133 |
+
|
134 |
+
You can roughly think of it as an iterator over callback events (though the format differs) - and you can use it on almost all LangChain components!
|
135 |
+
|
136 |
+
See [this guide](/docs/how_to/streaming/#using-stream-events) for more detailed information on how to use `.astream_events()`, including a table listing available events.
|
137 |
+
|
138 |
+
## Writing custom data to the stream
|
139 |
+
|
140 |
+
To write custom data to the stream, you will need to choose one of the following methods based on the component you are working with:
|
141 |
+
|
142 |
+
1. LangGraph's [StreamWriter](https://langchain-ai.github.io/langgraph/reference/types/#langgraph.types.StreamWriter) can be used to write custom data that will surface through the **stream** and **astream** APIs when working with LangGraph. **Important**: this is a LangGraph feature, so it is not available when working with pure LCEL. See [how to stream custom data](https://langchain-ai.github.io/langgraph/how-tos/streaming-content/) for more information.
|
143 |
+
2. [dispatch_custom_event](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.manager.dispatch_custom_event.html#) / [adispatch_custom_event](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.manager.adispatch_custom_event.html) can be used to write custom data that will be surfaced through the **astream_events** API. See [how to dispatch custom callback events](/docs/how_to/callbacks_custom_events/#astream-events-api) for more information.
|
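A minimal sketch of the second option, based on the API reference linked above; the event name `"progress"` and its payload are arbitrary examples:

```python
from langchain_core.callbacks.manager import adispatch_custom_event
from langchain_core.runnables import RunnableLambda

async def slow_step(x, config):
    # Emits a custom event that astream_events surfaces as "on_custom_event".
    # The config is accepted and passed through so this also works on
    # Python 3.9 / 3.10 (see the Runnable conceptual guide).
    await adispatch_custom_event("progress", {"status": "halfway there"}, config=config)
    return x

step = RunnableLambda(slow_step)

async def main():
    async for event in step.astream_events("some input", version="v2"):
        if event["event"] == "on_custom_event":
            print(event["name"], event["data"])
```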
144 |
+
|
145 |
+
## "Auto-Streaming" Chat Models
|
146 |
+
|
147 |
+
LangChain simplifies streaming from [chat models](/docs/concepts/chat_models) by automatically enabling streaming mode in certain cases, even when you’re not explicitly calling the streaming methods. This is particularly useful when you use the non-streaming `invoke` method but still want to stream the entire application, including intermediate results from the chat model.
|
148 |
+
|
149 |
+
### How It Works
|
150 |
+
|
151 |
+
When you call the `invoke` (or `ainvoke`) method on a chat model, LangChain will automatically switch to streaming mode if it detects that you are trying to stream the overall application.
|
152 |
+
|
153 |
+
Under the hood, it'll have `invoke` (or `ainvoke`) use the `stream` (or `astream`) method to generate its output. The result of the invocation will be the same as far as the code that was using `invoke` is concerned; however, while the chat model is being streamed, LangChain will take care of invoking `on_llm_new_token` events in LangChain's [callback system](/docs/concepts/callbacks). These callback events
|
154 |
+
allow LangGraph `stream`/`astream` and `astream_events` to surface the chat model's output in real-time.
|
155 |
+
|
156 |
+
Example:
|
157 |
+
|
158 |
+
```python
|
159 |
+
def node(state):
|
160 |
+
...
|
161 |
+
# The code below uses the invoke method, but LangChain will
|
162 |
+
# automatically switch to streaming mode
|
163 |
+
# when it detects that the overall
|
164 |
+
# application is being streamed.
|
165 |
+
ai_message = model.invoke(state["messages"])
|
166 |
+
...
|
167 |
+
|
168 |
+
for chunk in compiled_graph.stream(..., stream_mode="messages"):
|
169 |
+
...
|
170 |
+
```
|
171 |
+
## Async Programming
|
172 |
+
|
173 |
+
LangChain offers both synchronous (sync) and asynchronous (async) versions of many of its methods. The async methods are typically prefixed with an "a" (e.g., `ainvoke`, `astream`). When writing async code, it's crucial to consistently use these asynchronous methods to ensure non-blocking behavior and optimal performance.
|
174 |
+
|
175 |
+
If streaming data fails to appear in real-time, please ensure that you are using the correct async methods for your workflow.
|
176 |
+
|
177 |
+
Please review the [async programming in LangChain guide](/docs/concepts/async) for more information on writing async code with LangChain.
|
178 |
+
|
179 |
+
## Related Resources
|
180 |
+
|
181 |
+
Please see the following how-to guides for specific examples of streaming in LangChain:
|
182 |
+
* [LangGraph conceptual guide on streaming](https://langchain-ai.github.io/langgraph/concepts/streaming/)
|
183 |
+
* [LangGraph streaming how-to guides](https://langchain-ai.github.io/langgraph/how-tos/#streaming)
|
184 |
+
* [How to stream runnables](/docs/how_to/streaming/): This how-to guide goes over common streaming patterns with LangChain components (e.g., chat models) and with [LCEL](/docs/concepts/lcel).
|
185 |
+
* [How to stream chat models](/docs/how_to/chat_streaming/)
|
186 |
+
* [How to stream tool calls](/docs/how_to/tool_streaming/)
|
187 |
+
|
188 |
+
For writing custom data to the stream, please see the following resources:
|
189 |
+
|
190 |
+
* If using LangGraph, see [how to stream custom data](https://langchain-ai.github.io/langgraph/how-tos/streaming-content/).
|
191 |
+
* If using LCEL, see [how to dispatch custom callback events](/docs/how_to/callbacks_custom_events/#astream-events-api).
|
langchain_md_files/concepts/structured_outputs.mdx
ADDED
@@ -0,0 +1,148 @@
1 |
+
# Structured outputs
|
2 |
+
|
3 |
+
## Overview
|
4 |
+
|
5 |
+
For many applications, such as chatbots, models need to respond to users directly in natural language.
|
6 |
+
However, there are scenarios where we need models to output in a *structured format*.
|
7 |
+
For example, we might want to store the model output in a database and ensure that the output conforms to the database schema.
|
8 |
+
This need motivates the concept of structured output, where models can be instructed to respond with a particular output structure.
|
9 |
+
|
10 |
+

|
11 |
+
|
12 |
+
## Key concepts
|
13 |
+
|
14 |
+
**(1) Schema definition:** The output structure is represented as a schema, which can be defined in several ways.
|
15 |
+
**(2) Returning structured output:** The model is given this schema, and is instructed to return output that conforms to it.
|
16 |
+
|
17 |
+
## Recommended usage
|
18 |
+
|
19 |
+
This pseudo-code illustrates the recommended workflow when using structured output.
|
20 |
+
LangChain provides a method, [`with_structured_output()`](/docs/how_to/structured_output/#the-with_structured_output-method), that automates the process of binding the schema to the [model](/docs/concepts/chat_models/) and parsing the output.
|
21 |
+
This helper function is available for all model providers that support structured output.
|
22 |
+
|
23 |
+
```python
|
24 |
+
# Define schema
|
25 |
+
schema = {"foo": "bar"}
|
26 |
+
# Bind schema to model
|
27 |
+
model_with_structure = model.with_structured_output(schema)
|
28 |
+
# Invoke the model to produce structured output that matches the schema
|
29 |
+
structured_output = model_with_structure.invoke(user_input)
|
30 |
+
```
|
31 |
+
|
32 |
+
## Schema definition
|
33 |
+
|
34 |
+
The central concept is that the output structure of model responses needs to be represented in some way.
|
35 |
+
While the types of objects you can use depend on the model you're working with, there are common types of objects that are typically allowed or recommended for structured output in Python.
|
36 |
+
|
37 |
+
The simplest and most common format for structured output is a JSON-like structure, which in Python can be represented as a dictionary (dict) or list (list).
|
38 |
+
JSON objects (or dicts in Python) are often used directly when the tool requires raw, flexible, and minimal-overhead structured data.
|
39 |
+
|
40 |
+
```json
|
41 |
+
{
|
42 |
+
"answer": "The answer to the user's question",
|
43 |
+
"followup_question": "A followup question the user could ask"
|
44 |
+
}
|
45 |
+
```
|
46 |
+
|
47 |
+
As a second example, [Pydantic](https://docs.pydantic.dev/latest/) is particularly useful for defining structured output schemas because it offers type hints and validation.
|
48 |
+
Here's an example of a Pydantic schema:
|
49 |
+
|
50 |
+
```python
|
51 |
+
from pydantic import BaseModel, Field
|
52 |
+
class ResponseFormatter(BaseModel):
|
53 |
+
"""Always use this tool to structure your response to the user."""
|
54 |
+
answer: str = Field(description="The answer to the user's question")
|
55 |
+
followup_question: str = Field(description="A followup question the user could ask")
|
56 |
+
|
57 |
+
```
|
58 |
+
|
59 |
+
## Returning structured output
|
60 |
+
|
61 |
+
With a schema defined, we need a way to instruct the model to use it.
|
62 |
+
While one approach is to include this schema in the prompt and *ask nicely* for the model to use it, this is not recommended.
|
63 |
+
Several more powerful methods that utilize native features in the model provider's API are available.
|
64 |
+
|
65 |
+
### Using tool calling
|
66 |
+
|
67 |
+
Many [model providers support](/docs/integrations/chat/) tool calling, a concept discussed in more detail in our [tool calling guide](/docs/concepts/tool_calling/).
|
68 |
+
In short, tool calling involves binding a tool to a model and, when appropriate, the model can *decide* to call this tool and ensure its response conforms to the tool's schema.
|
69 |
+
With this in mind, the central concept is straightforward: *simply bind our schema to a model as a tool!*
|
70 |
+
Here is an example using the `ResponseFormatter` schema defined above:
|
71 |
+
|
72 |
+
```python
|
73 |
+
from langchain_openai import ChatOpenAI
|
74 |
+
model = ChatOpenAI(model="gpt-4o", temperature=0)
|
75 |
+
# Bind the ResponseFormatter schema as a tool to the model
|
76 |
+
model_with_tools = model.bind_tools([ResponseFormatter])
|
77 |
+
# Invoke the model
|
78 |
+
ai_msg = model_with_tools.invoke("What is the powerhouse of the cell?")
|
79 |
+
```
|
80 |
+
|
81 |
+
The arguments of the tool call are already extracted as a dictionary.
|
82 |
+
This dictionary can be optionally parsed into a Pydantic object, matching our original `ResponseFormatter` schema.
|
83 |
+
|
84 |
+
```python
|
85 |
+
# Get the tool call arguments
|
86 |
+
ai_msg.tool_calls[0]["args"]
|
87 |
+
{'answer': "The powerhouse of the cell is the mitochondrion. Mitochondria are organelles that generate most of the cell's supply of adenosine triphosphate (ATP), which is used as a source of chemical energy.",
|
88 |
+
'followup_question': 'What is the function of ATP in the cell?'}
|
89 |
+
# Parse the dictionary into a pydantic object
|
90 |
+
pydantic_object = ResponseFormatter.model_validate(ai_msg.tool_calls[0]["args"])
|
91 |
+
```
|
92 |
+
|
93 |
+
### JSON mode
|
94 |
+
|
95 |
+
In addition to tool calling, some model providers support a feature called `JSON mode`.
|
96 |
+
It supports a JSON schema definition as input and forces the model to produce a conforming JSON output.
|
97 |
+
You can find a table of model providers that support JSON mode [here](/docs/integrations/chat/).
|
98 |
+
Here is an example of how to use JSON mode with OpenAI:
|
99 |
+
|
100 |
+
```python
|
101 |
+
from langchain_openai import ChatOpenAI
|
102 |
+
model = ChatOpenAI(model="gpt-4o", model_kwargs={ "response_format": { "type": "json_object" } })
|
103 |
+
ai_msg = model.invoke("Return a JSON object with key 'random_ints' and a value of 10 random ints in [0-99]")
|
104 |
+
ai_msg.content
|
105 |
+
'\n{\n "random_ints": [23, 47, 89, 15, 34, 76, 58, 3, 62, 91]\n}'
|
106 |
+
```
|
107 |
+
|
108 |
+
One important point to flag: the model *still* returns a string, which needs to be parsed into a JSON object.
|
109 |
+
This parsing can, of course, be done with the `json` library, or with a JSON output parser if you need more advanced functionality.
|
110 |
+
See this [how-to guide on the JSON output parser](/docs/how_to/output_parser_json) for more details.
|
111 |
+
|
112 |
+
```python
|
113 |
+
import json
|
114 |
+
json_object = json.loads(ai_msg.content)
|
115 |
+
{'random_ints': [23, 47, 89, 15, 34, 76, 58, 3, 62, 91]}
|
116 |
+
```
|
117 |
+
|
118 |
+
## Structured output method
|
119 |
+
|
120 |
+
There are a few challenges when producing structured output with the above methods:
|
121 |
+
|
122 |
+
(1) When tool calling is used, tool call arguments need to be parsed from a dictionary back to the original schema.
|
123 |
+
|
124 |
+
(2) In addition, the model needs to be instructed to *always* use the tool when we want to enforce structured output, which is a provider-specific setting.
|
125 |
+
|
126 |
+
(3) When JSON mode is used, the output needs to be parsed into a JSON object.
|
127 |
+
|
128 |
+
With these challenges in mind, LangChain provides a helper function (`with_structured_output()`) to streamline the process.
|
129 |
+
|
130 |
+

|
131 |
+
|
132 |
+
This both binds the schema to the model as a tool and parses the output to the specified output schema.
|
133 |
+
|
134 |
+
```python
|
135 |
+
# Bind the schema to the model
|
136 |
+
model_with_structure = model.with_structured_output(ResponseFormatter)
|
137 |
+
# Invoke the model
|
138 |
+
structured_output = model_with_structure.invoke("What is the powerhouse of the cell?")
|
139 |
+
# Get back the pydantic object
|
140 |
+
structured_output
|
141 |
+
ResponseFormatter(answer="The powerhouse of the cell is the mitochondrion. Mitochondria are organelles that generate most of the cell's supply of adenosine triphosphate (ATP), which is used as a source of chemical energy.", followup_question='What is the function of ATP in the cell?')
|
142 |
+
```
|
143 |
+
|
144 |
+
:::info[Further reading]
|
145 |
+
|
146 |
+
For more details on usage, see our [how-to guide](/docs/how_to/structured_output/#the-with_structured_output-method).
|
147 |
+
|
148 |
+
:::
|
langchain_md_files/concepts/testing.mdx
ADDED
@@ -0,0 +1,81 @@
1 |
+
# Testing
|
2 |
+
<span data-heading-keywords="tests,testing,unit,integration"></span>
|
3 |
+
|
4 |
+
Testing is a critical part of the development process that ensures your code works as expected and meets the desired quality standards.
|
5 |
+
|
6 |
+
In the LangChain ecosystem, we have 2 main types of tests: **unit tests** and **integration tests**.
|
7 |
+
|
8 |
+
For integrations that implement standard LangChain abstractions, we have a set of **standard tests** (both unit and integration) that help maintain compatibility between different components and ensure reliability of high-usage ones.
|
9 |
+
|
10 |
+
## Unit Tests
|
11 |
+
|
12 |
+
**Definition**: Unit tests are designed to validate the smallest parts of your code—individual functions or methods—ensuring they work as expected in isolation. They do not rely on external systems or integrations.
|
13 |
+
|
14 |
+
**Example**: Testing the `convert_to_openai_messages` function to confirm it correctly converts an AI message to the OpenAI dictionary format:
|
15 |
+
|
16 |
+
```python
|
17 |
+
from langchain_core.messages import AIMessage, ToolCall, convert_to_openai_messages
|
18 |
+
|
19 |
+
def test_convert_to_openai_messages():
|
20 |
+
ai_message = AIMessage(
|
21 |
+
content="Let me call that tool for you!",
|
22 |
+
tool_calls=[
|
23 |
+
ToolCall(name='parrot_multiply_tool', id='1', args={'a': 2, 'b': 3}),
|
24 |
+
]
|
25 |
+
)
|
26 |
+
|
27 |
+
result = convert_to_openai_messages(ai_message)
|
28 |
+
|
29 |
+
expected = {
|
30 |
+
"role": "assistant",
|
31 |
+
"tool_calls": [
|
32 |
+
{
|
33 |
+
"type": "function",
|
34 |
+
"id": "1",
|
35 |
+
"function": {
|
36 |
+
"name": "parrot_multiply_tool",
|
37 |
+
"arguments": '{"a": 2, "b": 3}',
|
38 |
+
},
|
39 |
+
}
|
40 |
+
],
|
41 |
+
"content": "Let me call that tool for you!",
|
42 |
+
}
|
43 |
+
assert result == expected # Ensure conversion matches expected output
|
44 |
+
```
|
45 |
+
|
46 |
+
---
|
47 |
+
|
48 |
+
## Integration Tests
|
49 |
+
|
50 |
+
**Definition**: Integration tests validate that multiple components or systems work together as expected. For tools or integrations relying on external services, these tests often ensure end-to-end functionality.
|
51 |
+
|
52 |
+
**Example**: Testing `ParrotMultiplyTool` with access to an API service that multiplies two numbers and adds 80:
|
53 |
+
|
54 |
+
```python
|
55 |
+
def test_integration_with_service():
|
56 |
+
tool = ParrotMultiplyTool()
|
57 |
+
result = tool.invoke({"a": 2, "b": 3})
|
58 |
+
assert result == 86
|
59 |
+
```
|
60 |
+
|
61 |
+
---
|
62 |
+
|
63 |
+
## Standard Tests
|
64 |
+
|
65 |
+
**Definition**: Standard tests are pre-defined tests provided by LangChain to ensure consistency and reliability across all tools and integrations. They include both unit and integration test templates tailored for LangChain components.
|
66 |
+
|
67 |
+
**Example**: Subclassing LangChain's `ToolsUnitTests` or `ToolsIntegrationTests` to automatically run standard tests:
|
68 |
+
|
69 |
+
```python
|
70 |
+
from langchain_tests.unit_tests import ToolsUnitTests
|
71 |
+
|
72 |
+
class TestParrotMultiplyToolUnit(ToolsUnitTests):
|
73 |
+
@property
|
74 |
+
def tool_constructor(self):
|
75 |
+
return ParrotMultiplyTool
|
76 |
+
|
77 |
+
def tool_invoke_params_example(self):
|
78 |
+
return {"a": 2, "b": 3}
|
79 |
+
```
|
80 |
+
|
81 |
+
To learn more, check out our guide on [how to add standard tests to an integration](../../contributing/how_to/integrations/standard_tests).
|
langchain_md_files/concepts/text_llms.mdx
ADDED
@@ -0,0 +1,10 @@
1 |
+
# String-in, string-out llms
|
2 |
+
|
3 |
+
:::tip
|
4 |
+
You are probably looking for the [Chat Model Concept Guide](/docs/concepts/chat_models) page for more information.
|
5 |
+
:::
|
6 |
+
|
7 |
+
LangChain has implementations for older language models that take a string as input and return a string as output. These models are typically named without the "Chat" prefix (e.g., `Ollama`, `Anthropic`, `OpenAI`, etc.), and may include the "LLM" suffix (e.g., `OllamaLLM`, `AnthropicLLM`, `OpenAILLM`, etc.). These models implement the [BaseLLM](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.llms.BaseLLM.html#langchain_core.language_models.llms.BaseLLM) interface.
|
8 |
+
|
9 |
+
Users should almost exclusively use the newer [Chat Models](/docs/concepts/chat_models) as most
|
10 |
+
model providers have adopted a chat-like interface for interacting with language models.
|
langchain_md_files/concepts/text_splitters.mdx
ADDED
@@ -0,0 +1,135 @@
1 |
+
# Text splitters
|
2 |
+
<span data-heading-keywords="text splitter,text splitting"></span>
|
3 |
+
|
4 |
+
:::info[Prerequisites]
|
5 |
+
|
6 |
+
* [Documents](/docs/concepts/retrievers/#interface)
|
7 |
+
* [Tokenization](/docs/concepts/tokens)
|
8 |
+
:::
|
9 |
+
|
10 |
+
## Overview
|
11 |
+
|
12 |
+
Document splitting is often a crucial preprocessing step for many applications.
|
13 |
+
It involves breaking down large texts into smaller, manageable chunks.
|
14 |
+
This process offers several benefits, such as ensuring consistent processing of varying document lengths, overcoming input size limitations of models, and improving the quality of text representations used in retrieval systems.
|
15 |
+
There are several strategies for splitting documents, each with its own advantages.
|
16 |
+
|
17 |
+
## Key concepts
|
18 |
+
|
19 |
+

|
20 |
+
|
21 |
+
Text splitters split documents into smaller chunks for use in downstream applications.
|
22 |
+
|
23 |
+
## Why split documents?
|
24 |
+
|
25 |
+
There are several reasons to split documents:
|
26 |
+
|
27 |
+
- **Handling non-uniform document lengths**: Real-world document collections often contain texts of varying sizes. Splitting ensures consistent processing across all documents.
|
28 |
+
- **Overcoming model limitations**: Many embedding models and language models have maximum input size constraints. Splitting allows us to process documents that would otherwise exceed these limits.
|
29 |
+
- **Improving representation quality**: For longer documents, the quality of embeddings or other representations may degrade as they try to capture too much information. Splitting can lead to more focused and accurate representations of each section.
|
30 |
+
- **Enhancing retrieval precision**: In information retrieval systems, splitting can improve the granularity of search results, allowing for more precise matching of queries to relevant document sections.
|
31 |
+
- **Optimizing computational resources**: Working with smaller chunks of text can be more memory-efficient and allow for better parallelization of processing tasks.
|
32 |
+
|
33 |
+
Now, the next question is *how* to split the documents into chunks! There are several strategies, each with its own advantages.
|
34 |
+
|
35 |
+
:::info[Further reading]
|
36 |
+
* See Greg Kamradt's [chunkviz](https://chunkviz.up.railway.app/) to visualize different splitting strategies discussed below.
|
37 |
+
:::
|
38 |
+
|
39 |
+
## Approaches
|
40 |
+
|
41 |
+
### Length-based
|
42 |
+
|
43 |
+
The most intuitive strategy is to split documents based on their length. This simple yet effective approach ensures that each chunk doesn't exceed a specified size limit.
|
44 |
+
Key benefits of length-based splitting:
|
45 |
+
- Straightforward implementation
|
46 |
+
- Consistent chunk sizes
|
47 |
+
- Easily adaptable to different model requirements
|
48 |
+
|
49 |
+
Types of length-based splitting:
|
50 |
+
- **Token-based**: Splits text based on the number of tokens, which is useful when working with language models.
|
51 |
+
- **Character-based**: Splits text based on the number of characters, which can be more consistent across different types of text.
|
52 |
+
|
53 |
+
Example implementation using LangChain's `CharacterTextSplitter` with token-based splitting:
|
54 |
+
|
55 |
+
```python
|
56 |
+
from langchain_text_splitters import CharacterTextSplitter
|
57 |
+
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
|
58 |
+
encoding_name="cl100k_base", chunk_size=100, chunk_overlap=0
|
59 |
+
)
|
60 |
+
texts = text_splitter.split_text(document)
|
61 |
+
```
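For comparison, a character-based sketch (assuming `document` is the same raw string as above) counts raw characters rather than tokens:

```python
from langchain_text_splitters import CharacterTextSplitter

# Splits on the separator, then merges pieces into chunks of roughly
# 100 characters (measured with len) with no overlap.
text_splitter = CharacterTextSplitter(separator="\n\n", chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(document)
```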
|
62 |
+
|
63 |
+
:::info[Further reading]
|
64 |
+
|
65 |
+
* See the how-to guide for [token-based](/docs/how_to/split_by_token/) splitting.
|
66 |
+
* See the how-to guide for [character-based](/docs/how_to/character_text_splitter/) splitting.
|
67 |
+
|
68 |
+
:::
|
69 |
+
|
70 |
+
### Text-structured based
|
71 |
+
|
72 |
+
Text is naturally organized into hierarchical units such as paragraphs, sentences, and words.
|
73 |
+
We can leverage this inherent structure to inform our splitting strategy, creating splits that maintain natural language flow, preserve semantic coherence within each split, and adapt to varying levels of text granularity.
|
74 |
+
LangChain's [`RecursiveCharacterTextSplitter`](/docs/how_to/recursive_text_splitter/) implements this concept:
|
75 |
+
- The `RecursiveCharacterTextSplitter` attempts to keep larger units (e.g., paragraphs) intact.
|
76 |
+
- If a unit exceeds the chunk size, it moves to the next level (e.g., sentences).
|
77 |
+
- This process continues down to the word level if necessary.
|
78 |
+
|
79 |
+
Here is an example of its usage:
|
80 |
+
|
81 |
+
```python
|
82 |
+
from langchain_text_splitters import RecursiveCharacterTextSplitter
|
83 |
+
text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=0)
|
84 |
+
texts = text_splitter.split_text(document)
|
85 |
+
```
|
86 |
+
|
87 |
+
:::info[Further reading]
|
88 |
+
|
89 |
+
* See the how-to guide for [recursive text splitting](/docs/how_to/recursive_text_splitter/).
|
90 |
+
|
91 |
+
:::
|
92 |
+
|
93 |
+
### Document-structured based
|
94 |
+
|
95 |
+
Some documents have an inherent structure, such as HTML, Markdown, or JSON files.
|
96 |
+
In these cases, it's beneficial to split the document based on its structure, as it often naturally groups semantically related text.
|
97 |
+
Key benefits of structure-based splitting:
|
98 |
+
- Preserves the logical organization of the document
|
99 |
+
- Maintains context within each chunk
|
100 |
+
- Can be more effective for downstream tasks like retrieval or summarization
|
101 |
+
|
102 |
+
Examples of structure-based splitting:
|
103 |
+
- **Markdown**: Split based on headers (e.g., #, ##, ###)
|
104 |
+
- **HTML**: Split using tags
|
105 |
+
- **JSON**: Split by object or array elements
|
106 |
+
- **Code**: Split by functions, classes, or logical blocks
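As a brief illustration of the Markdown case, here is a sketch using LangChain's `MarkdownHeaderTextSplitter` (the sample text is illustrative):

```python
from langchain_text_splitters import MarkdownHeaderTextSplitter

markdown_document = "# Intro\n\nSome intro text.\n\n## Setup\n\nSetup instructions here."
headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]

splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
chunks = splitter.split_text(markdown_document)
# Each chunk is a Document whose metadata records the headers it falls under.
```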
|
107 |
+
|
108 |
+
:::info[Further reading]
|
109 |
+
|
110 |
+
* See the how-to guide for [Markdown splitting](/docs/how_to/markdown_header_metadata_splitter/).
|
111 |
+
* See the how-to guide for [Recursive JSON splitting](/docs/how_to/recursive_json_splitter/).
|
112 |
+
* See the how-to guide for [Code splitting](/docs/how_to/code_splitter/).
|
113 |
+
* See the how-to guide for [HTML splitting](/docs/how_to/split_html/).
|
114 |
+
|
115 |
+
:::
|
116 |
+
|
117 |
+
### Semantic meaning based
|
118 |
+
|
119 |
+
Unlike the previous methods, semantic-based splitting actually considers the *content* of the text.
|
120 |
+
While other approaches use document or text structure as proxies for semantic meaning, this method directly analyzes the text's semantics.
|
121 |
+
There are several ways to implement this, but conceptually the approach is to split text when there are significant changes in text *meaning*.
|
122 |
+
As an example, we can use a sliding window approach to generate embeddings, and compare the embeddings to find significant differences:
|
123 |
+
|
124 |
+
- Start with the first few sentences and generate an embedding.
|
125 |
+
- Move to the next group of sentences and generate another embedding (e.g., using a sliding window approach).
|
126 |
+
- Compare the embeddings to find significant differences, which indicate potential "break points" between semantic sections.
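A rough sketch of this idea (this is not LangChain's semantic chunker API; `embed` is an assumed callable that maps text to a vector):

```python
import numpy as np


def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def find_break_points(sentences, embed, window=3, threshold=0.75):
    """Return sentence indices where the meaning of the text appears to shift."""
    groups = [" ".join(sentences[i:i + window]) for i in range(0, len(sentences), window)]
    embeddings = [embed(group) for group in groups]
    break_points = []
    for i in range(1, len(embeddings)):
        # Low similarity between adjacent windows suggests a semantic break.
        if cosine_similarity(embeddings[i - 1], embeddings[i]) < threshold:
            break_points.append(i * window)
    return break_points
```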
|
127 |
+
|
128 |
+
This technique helps create chunks that are more semantically coherent, potentially improving the quality of downstream tasks like retrieval or summarization.
|
129 |
+
|
130 |
+
:::info[Further reading]
|
131 |
+
|
132 |
+
* See the how-to guide for [splitting text based on semantic meaning](/docs/how_to/semantic-chunker/).
|
133 |
+
* See Greg Kamradt's [notebook](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb) showcasing semantic splitting.
|
134 |
+
|
135 |
+
:::
|
langchain_md_files/concepts/tokens.mdx
ADDED
@@ -0,0 +1,58 @@
1 |
+
# Tokens
|
2 |
+
|
3 |
+
Modern large language models (LLMs) are typically based on a transformer architecture that processes a sequence of units known as tokens. Tokens are the fundamental elements that models use to break down input and generate output. In this section, we'll discuss what tokens are and how they are used by language models.
|
4 |
+
|
5 |
+
## What is a token?
|
6 |
+
|
7 |
+
A **token** is the basic unit that a language model reads, processes, and generates. These units can vary based on how the model provider defines them, but in general, they could represent:
|
8 |
+
|
9 |
+
* A whole word (e.g., "apple"),
|
10 |
+
* A part of a word (e.g., "app"),
|
11 |
+
* Or other linguistic components such as punctuation or spaces.
|
12 |
+
|
13 |
+
The way the model tokenizes the input depends on its **tokenizer algorithm**, which converts the input into tokens. Similarly, the model’s output comes as a stream of tokens, which is then decoded back into human-readable text.
|
14 |
+
|
15 |
+
## How tokens work in language models
|
16 |
+
|
17 |
+
The reason language models use tokens is tied to how they understand and predict language. Rather than processing characters or entire sentences directly, language models focus on **tokens**, which represent meaningful linguistic units. Here's how the process works:
|
18 |
+
|
19 |
+
1. **Input Tokenization**: When you provide a model with a prompt (e.g., "LangChain is cool!"), the tokenizer algorithm splits the text into tokens. For example, the sentence could be tokenized into parts like `["Lang", "Chain", " is", " cool", "!"]`. Note that token boundaries don’t always align with word boundaries.
|
20 |
+

|
21 |
+
|
22 |
+
2. **Processing**: The transformer architecture behind these models processes tokens sequentially to predict the next token in a sentence. It does this by analyzing the relationships between tokens, capturing context and meaning from the input.
|
23 |
+
3. **Output Generation**: The model generates new tokens one by one. These output tokens are then decoded back into human-readable text.
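As a minimal sketch of the tokenization step above, using the `tiktoken` library (an assumption; each provider ships its own tokenizer, so exact token boundaries vary):

```python
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
token_ids = encoding.encode("LangChain is cool!")
tokens = [encoding.decode([token_id]) for token_id in token_ids]
print(tokens)  # e.g. ['Lang', 'Chain', ' is', ' cool', '!']
```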
|
24 |
+
|
25 |
+
Using tokens instead of raw characters allows the model to focus on linguistically meaningful units, which helps it capture grammar, structure, and context more effectively.
|
26 |
+
|
27 |
+
## Tokens don’t have to be text
|
28 |
+
|
29 |
+
Although tokens are most commonly used to represent text, they don’t have to be limited to textual data. Tokens can also serve as abstract representations of **multi-modal data**, such as:
|
30 |
+
|
31 |
+
- **Images**,
|
32 |
+
- **Audio**,
|
33 |
+
- **Video**,
|
34 |
+
- And other types of data.
|
35 |
+
|
36 |
+
At the time of writing, virtually no models support **multi-modal output**, and only a few models can handle **multi-modal inputs** (e.g., text combined with images or audio). However, as advancements in AI continue, we expect **multi-modality** to become much more common. This would allow models to process and generate a broader range of media, significantly expanding the scope of what tokens can represent and how models can interact with diverse types of data.
|
37 |
+
|
38 |
+
:::note
|
39 |
+
In principle, **anything that can be represented as a sequence of tokens** could be modeled in a similar way. For example, **DNA sequences**—which are composed of a series of nucleotides (A, T, C, G)—can be tokenized and modeled to capture patterns, make predictions, or generate sequences. This flexibility allows transformer-based models to handle diverse types of sequential data, further broadening their potential applications across various domains, including bioinformatics, signal processing, and other fields that involve structured or unstructured sequences.
|
40 |
+
:::
|
41 |
+
|
42 |
+
Please see the [multimodality](/docs/concepts/multimodality) section for more information on multi-modal inputs and outputs.
|
43 |
+
|
44 |
+
## Why not use characters?
|
45 |
+
|
46 |
+
Using tokens instead of individual characters makes models both more efficient and better at understanding context and grammar. Tokens represent meaningful units, like whole words or parts of words, allowing models to capture language structure more effectively than by processing raw characters. Token-level processing also reduces the number of units the model has to handle, leading to faster computation.
|
47 |
+
|
48 |
+
In contrast, character-level processing would require handling a much larger sequence of input, making it harder for the model to learn relationships and context. Tokens enable models to focus on linguistic meaning, making them more accurate and efficient in generating responses.
|
49 |
+
|
50 |
+
## How tokens correspond to text
|
51 |
+
|
52 |
+
Please see this post from [OpenAI](https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them) for more details on how tokens are counted and how they correspond to text.
|
53 |
+
|
54 |
+
According to the OpenAI post, the approximate token counts for English text are as follows:
|
55 |
+
|
56 |
+
* 1 token ~= 4 chars in English
|
57 |
+
* 1 token ~= ¾ words
|
58 |
+
* 100 tokens ~= 75 words
|
langchain_md_files/concepts/tool_calling.mdx
ADDED
@@ -0,0 +1,149 @@
1 |
+
# Tool calling
|
2 |
+
|
3 |
+
:::info[Prerequisites]
|
4 |
+
* [Tools](/docs/concepts/tools)
|
5 |
+
* [Chat Models](/docs/concepts/chat_models)
|
6 |
+
:::
|
7 |
+
|
8 |
+
|
9 |
+
## Overview
|
10 |
+
|
11 |
+
Many AI applications interact directly with humans. In these cases, it is appropriate for models to respond in natural language.
|
12 |
+
But what about cases where we want a model to also interact *directly* with systems, such as databases or an API?
|
13 |
+
These systems often have a particular input schema; for example, APIs frequently have a required payload structure.
|
14 |
+
This need motivates the concept of *tool calling*. You can use [tool calling](https://platform.openai.com/docs/guides/function-calling/example-use-cases) to request model responses that match a particular schema.
|
15 |
+
|
16 |
+
:::info
|
17 |
+
You will sometimes hear the term `function calling`. We use this term interchangeably with `tool calling`.
|
18 |
+
:::
|
19 |
+
|
20 |
+

|
21 |
+
|
22 |
+
## Key concepts
|
23 |
+
|
24 |
+
**(1) Tool Creation:** Use the [@tool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html) decorator to create a [tool](/docs/concepts/tools). A tool is an association between a function and its schema.
|
25 |
+
**(2) Tool Binding:** The tool needs to be connected to a model that supports tool calling. This gives the model awareness of the tool and the associated input schema required by the tool.
|
26 |
+
**(3) Tool Calling:** When appropriate, the model can decide to call a tool and ensure its response conforms to the tool's input schema.
|
27 |
+
**(4) Tool Execution:** The tool can be executed using the arguments provided by the model.
|
28 |
+
|
29 |
+

|
30 |
+
|
31 |
+
## Recommended usage
|
32 |
+
|
33 |
+
This pseudo-code illustrates the recommended workflow for using tool calling.
|
34 |
+
Created tools are passed to the `.bind_tools()` method as a list.
|
35 |
+
The model can then be called as usual. If a tool call is made, the model's response will contain the tool call arguments.
|
36 |
+
The tool call arguments can be passed directly to the tool.
|
37 |
+
|
38 |
+
```python
|
39 |
+
# Tool creation
|
40 |
+
tools = [my_tool]
|
41 |
+
# Tool binding
|
42 |
+
model_with_tools = model.bind_tools(tools)
|
43 |
+
# Tool calling
|
44 |
+
response = model_with_tools.invoke(user_input)
|
45 |
+
```
|
46 |
+
|
47 |
+
## Tool creation
|
48 |
+
|
49 |
+
The recommended way to create a tool is using the `@tool` decorator.
|
50 |
+
|
51 |
+
```python
|
52 |
+
from langchain_core.tools import tool
|
53 |
+
|
54 |
+
@tool
|
55 |
+
def multiply(a: int, b: int) -> int:
|
56 |
+
"""Multiply a and b."""
|
57 |
+
return a * b
|
58 |
+
```
|
59 |
+
|
60 |
+
:::info[Further reading]
|
61 |
+
|
62 |
+
* See our conceptual guide on [tools](/docs/concepts/tools/) for more details.
|
63 |
+
* See our [model integrations](/docs/integrations/chat/) that support tool calling.
|
64 |
+
* See our [how-to guide](/docs/how_to/tool_calling/) on tool calling.
|
65 |
+
|
66 |
+
:::
|
67 |
+
|
68 |
+
## Tool binding
|
69 |
+
|
70 |
+
[Many](https://platform.openai.com/docs/guides/function-calling) [model providers](https://platform.openai.com/docs/guides/function-calling) support tool calling.
|
71 |
+
|
72 |
+
:::tip
|
73 |
+
See our [model integration page](/docs/integrations/chat/) for a list of providers that support tool calling.
|
74 |
+
:::
|
75 |
+
|
76 |
+
The central concept to understand is that LangChain provides a standardized interface for connecting tools to models.
|
77 |
+
The `.bind_tools()` method can be used to specify which tools are available for a model to call.
|
78 |
+
|
79 |
+
```python
|
80 |
+
model_with_tools = model.bind_tools(tools_list)
|
81 |
+
```
|
82 |
+
|
83 |
+
As a specific example, let's take a function `multiply` and bind it as a tool to a model that supports tool calling.
|
84 |
+
|
85 |
+
```python
|
86 |
+
def multiply(a: int, b: int) -> int:
|
87 |
+
"""Multiply a and b.
|
88 |
+
|
89 |
+
Args:
|
90 |
+
a: first int
|
91 |
+
b: second int
|
92 |
+
"""
|
93 |
+
return a * b
|
94 |
+
|
95 |
+
llm_with_tools = tool_calling_model.bind_tools([multiply])
|
96 |
+
```
|
97 |
+
|
98 |
+
## Tool calling
|
99 |
+
|
100 |
+

|
101 |
+
|
102 |
+
A key principle of tool calling is that the model decides when to use a tool based on the input's relevance. The model doesn't always need to call a tool.
|
103 |
+
For example, given an unrelated input, the model would not call the tool:
|
104 |
+
|
105 |
+
```python
|
106 |
+
result = llm_with_tools.invoke("Hello world!")
|
107 |
+
```
|
108 |
+
|
109 |
+
The result would be an `AIMessage` containing the model's response in natural language (e.g., "Hello!").
|
110 |
+
However, if we pass an input *relevant to the tool*, the model should choose to call it:
|
111 |
+
|
112 |
+
```python
|
113 |
+
result = llm_with_tools.invoke("What is 2 multiplied by 3?")
|
114 |
+
```
|
115 |
+
|
116 |
+
As before, the output `result` will be an `AIMessage`.
|
117 |
+
But, if the tool was called, `result` will have a `tool_calls` attribute.
|
118 |
+
This attribute includes everything needed to execute the tool, including the tool name and input arguments:
|
119 |
+
|
120 |
+
```
|
121 |
+
result.tool_calls
|
122 |
+
[{'name': 'multiply', 'args': {'a': 2, 'b': 3}, 'id': 'xxx', 'type': 'tool_call'}]
|
123 |
+
```
|
124 |
+
|
125 |
+
For more details on usage, see our [how-to guides](/docs/how_to/#tools)!
|
126 |
+
|
127 |
+
## Tool execution
|
128 |
+
|
129 |
+
[Tools](/docs/concepts/tools/) implement the [Runnable](/docs/concepts/runnables/) interface, which means that they can be invoked (e.g., `tool.invoke(args)`) directly.
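Continuing the example above (a sketch, assuming `multiply` was created with the `@tool` decorator as in the Tool creation section), the arguments from the model's tool call can be passed straight back to the tool:

```python
# Take the first tool call the model requested and execute it.
tool_call = result.tool_calls[0]                  # {'name': 'multiply', 'args': {'a': 2, 'b': 3}, ...}
tool_output = multiply.invoke(tool_call["args"])  # 6
```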
|
130 |
+
|
131 |
+
[LangGraph](https://langchain-ai.github.io/langgraph/) offers pre-built components (e.g., [`ToolNode`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.ToolNode)) that will often invoke the tool on behalf of the user.
|
132 |
+
|
133 |
+
:::info[Further reading]
|
134 |
+
|
135 |
+
* See our [how-to guide](/docs/how_to/tool_calling/) on tool calling.
|
136 |
+
* See the [LangGraph documentation on using ToolNode](https://langchain-ai.github.io/langgraph/how-tos/tool-calling/).
|
137 |
+
|
138 |
+
:::
|
139 |
+
|
140 |
+
## Best practices
|
141 |
+
|
142 |
+
When designing [tools](/docs/concepts/tools/) to be used by a model, it is important to keep in mind that:
|
143 |
+
|
144 |
+
* Models that have explicit [tool-calling APIs](/docs/concepts/tool_calling) will be better at tool calling than non-fine-tuned models.
|
145 |
+
* Models will perform better if the tools have well-chosen names and descriptions.
|
146 |
+
* Simple, narrowly scoped tools are easier for models to use than complex tools.
|
147 |
+
* Asking the model to select from a large list of tools poses challenges for the model.
|
148 |
+
|
149 |
+
|
langchain_md_files/concepts/tools.mdx
ADDED
@@ -0,0 +1,211 @@
1 |
+
# Tools
|
2 |
+
|
3 |
+
:::info Prerequisites
|
4 |
+
- [Chat models](/docs/concepts/chat_models/)
|
5 |
+
:::
|
6 |
+
|
7 |
+
## Overview
|
8 |
+
|
9 |
+
The **tool** abstraction in LangChain associates a Python **function** with a **schema** that defines the function's **name**, **description** and **expected arguments**.
|
10 |
+
|
11 |
+
**Tools** can be passed to [chat models](/docs/concepts/chat_models) that support [tool calling](/docs/concepts/tool_calling) allowing the model to request the execution of a specific function with specific inputs.
|
12 |
+
|
13 |
+
## Key concepts
|
14 |
+
|
15 |
+
- Tools are a way to encapsulate a function and its schema in a way that can be passed to a chat model.
|
16 |
+
- Create tools using the [@tool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html) decorator, which simplifies the process of tool creation, supporting the following:
|
17 |
+
- Automatically inferring the tool's **name**, **description** and **expected arguments**, while also supporting customization.
|
18 |
+
- Defining tools that return **artifacts** (e.g. images, dataframes, etc.)
|
19 |
+
- Hiding input arguments from the schema (and hence from the model) using **injected tool arguments**.
|
20 |
+
|
21 |
+
## Tool interface
|
22 |
+
|
23 |
+
The tool interface is defined in the [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool) class which is a subclass of the [Runnable Interface](/docs/concepts/runnables).
|
24 |
+
|
25 |
+
The key attributes that correspond to the tool's **schema**:
|
26 |
+
|
27 |
+
- **name**: The name of the tool.
|
28 |
+
- **description**: A description of what the tool does.
|
29 |
+
- **args**: Property that returns the JSON schema for the tool's arguments.
|
30 |
+
|
31 |
+
The key methods to execute the function associated with the **tool**:
|
32 |
+
|
33 |
+
- **invoke**: Invokes the tool with the given arguments.
|
34 |
+
- **ainvoke**: Invokes the tool with the given arguments, asynchronously. Used for [async programming with Langchain](/docs/concepts/async).
|
35 |
+
|
36 |
+
## Create tools using the `@tool` decorator
|
37 |
+
|
38 |
+
The recommended way to create tools is using the [@tool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html) decorator. This decorator is designed to simplify the process of tool creation and should be used in most cases. After defining a function, you can decorate it with [@tool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html) to create a tool that implements the [Tool Interface](#tool-interface).
|
39 |
+
|
40 |
+
```python
|
41 |
+
from langchain_core.tools import tool
|
42 |
+
|
43 |
+
@tool
|
44 |
+
def multiply(a: int, b: int) -> int:
|
45 |
+
"""Multiply two numbers."""
|
46 |
+
return a * b
|
47 |
+
```
|
48 |
+
|
49 |
+
For more details on how to create tools, see the [how to create custom tools](/docs/how_to/custom_tools/) guide.
|
50 |
+
|
51 |
+
:::note
|
52 |
+
LangChain has a few other ways to create tools; e.g., by sub-classing the [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool) class or by using `StructuredTool`. These methods are shown in the [how to create custom tools guide](/docs/how_to/custom_tools/), but
|
53 |
+
we generally recommend using the `@tool` decorator for most cases.
|
54 |
+
:::
|
55 |
+
|
56 |
+
## Use the tool directly
|
57 |
+
|
58 |
+
Once you have defined a tool, you can use it directly by calling the function. For example, to use the `multiply` tool defined above:
|
59 |
+
|
60 |
+
```python
|
61 |
+
multiply.invoke({"a": 2, "b": 3})
|
62 |
+
```
|
63 |
+
|
64 |
+
### Inspect
|
65 |
+
|
66 |
+
You can also inspect the tool's schema and other properties:
|
67 |
+
|
68 |
+
```python
|
69 |
+
print(multiply.name) # multiply
|
70 |
+
print(multiply.description) # Multiply two numbers.
|
71 |
+
print(multiply.args)
|
72 |
+
# {
|
73 |
+
# 'type': 'object',
|
74 |
+
# 'properties': {'a': {'type': 'integer'}, 'b': {'type': 'integer'}},
|
75 |
+
# 'required': ['a', 'b']
|
76 |
+
# }
|
77 |
+
```
|
78 |
+
|
79 |
+
:::note
|
80 |
+
If you're using pre-built LangChain or LangGraph components like [create_react_agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent), you might not need to interact with tools directly. However, understanding how to use them can be valuable for debugging and testing. Additionally, when building custom LangGraph workflows, you may find it necessary to work with tools directly.
|
81 |
+
:::
|
82 |
+
|
83 |
+
## Configuring the schema
|
84 |
+
|
85 |
+
The `@tool` decorator offers additional options to configure the schema of the tool (e.g., modify name, description
|
86 |
+
or parse the function's doc-string to infer the schema).
|
87 |
+
|
88 |
+
Please see the [API reference for @tool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html) for more details and review the [how to create custom tools](/docs/how_to/custom_tools/) guide for examples.
|
89 |
+
|
90 |
+
## Tool artifacts
|
91 |
+
|
92 |
+
**Tools** are utilities that can be called by a model, and whose outputs are designed to be fed back to a model. Sometimes, however, there are artifacts of a tool's execution that we want to make accessible to downstream components in our chain or agent, but that we don't want to expose to the model itself. For example if a tool returns a custom object, a dataframe or an image, we may want to pass some metadata about this output to the model without passing the actual output to the model. At the same time, we may want to be able to access this full output elsewhere, for example in downstream tools.
|
93 |
+
|
94 |
+
```python
|
95 |
+
@tool(response_format="content_and_artifact")
|
96 |
+
def some_tool(...) -> Tuple[str, Any]:
|
97 |
+
"""Tool that does something."""
|
98 |
+
...
|
99 |
+
return 'Message for chat model', some_artifact
|
100 |
+
```
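A more concrete, hypothetical sketch (the tool name and data are illustrative): when such a tool is invoked with a full tool-call dict, it returns a `ToolMessage` whose `content` is sent to the model while `artifact` carries the full object for downstream use.

```python
from typing import List, Tuple

from langchain_core.tools import tool


@tool(response_format="content_and_artifact")
def generate_numbers(n: int) -> Tuple[str, List[int]]:
    """Generate the first n non-negative integers."""
    numbers = list(range(n))
    return f"Generated {len(numbers)} numbers", numbers


message = generate_numbers.invoke(
    {"name": "generate_numbers", "args": {"n": 5}, "id": "1", "type": "tool_call"}
)
print(message.content)   # "Generated 5 numbers"
print(message.artifact)  # [0, 1, 2, 3, 4]
```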
|
101 |
+
|
102 |
+
See [how to return artifacts from tools](/docs/how_to/tool_artifacts/) for more details.
|
103 |
+
|
104 |
+
## Special type annotations
|
105 |
+
|
106 |
+
There are a number of special type annotations that can be used in the tool's function signature to configure the run time behavior of the tool.
|
107 |
+
|
108 |
+
The following type annotations will end up **removing** the argument from the tool's schema. This can be useful for arguments that should not be exposed to the model and that the model should not be able to control.
|
109 |
+
|
110 |
+
- **InjectedToolArg**: Value should be injected manually at runtime using `.invoke` or `.ainvoke`.
|
111 |
+
- **RunnableConfig**: Pass in the RunnableConfig object to the tool.
|
112 |
+
- **InjectedState**: Pass in the overall state of the LangGraph graph to the tool.
|
113 |
+
- **InjectedStore**: Pass in the LangGraph store object to the tool.
|
114 |
+
|
115 |
+
You can also use the `Annotated` type with a string literal to provide a **description** for the corresponding argument that **WILL** be exposed in the tool's schema.
|
116 |
+
|
117 |
+
- **Annotated[..., "string literal"]** -- Adds a description to the argument that will be exposed in the tool's schema.
|
118 |
+
|
119 |
+
### InjectedToolArg
|
120 |
+
|
121 |
+
There are cases where certain arguments need to be passed to a tool at runtime but should not be generated by the model itself. For this, we use the `InjectedToolArg` annotation, which allows certain parameters to be hidden from the tool's schema.
|
122 |
+
|
123 |
+
For example, if a tool requires a `user_id` to be injected dynamically at runtime, it can be structured in this way:
|
124 |
+
|
125 |
+
```python
|
126 |
+
from langchain_core.tools import tool, InjectedToolArg
|
127 |
+
|
128 |
+
@tool
|
129 |
+
def user_specific_tool(input_data: str, user_id: InjectedToolArg) -> str:
|
130 |
+
"""Tool that processes input data."""
|
131 |
+
return f"User {user_id} processed {input_data}"
|
132 |
+
```
|
133 |
+
|
134 |
+
Annotating the `user_id` argument with `InjectedToolArg` tells LangChain that this argument should not be exposed as part of the
|
135 |
+
tool's schema.
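The hidden argument still has to be supplied when the tool is invoked at runtime, for example (a sketch; the `user_id` value is illustrative):

```python
user_specific_tool.invoke({"input_data": "some text", "user_id": "user-123"})
```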
|
136 |
+
|
137 |
+
See [how to pass run time values to tools](/docs/how_to/tool_runtime/) for more details on how to use `InjectedToolArg`.
|
138 |
+
|
139 |
+
|
140 |
+
### RunnableConfig
|
141 |
+
|
142 |
+
You can use the `RunnableConfig` object to pass custom run time values to tools.
|
143 |
+
|
144 |
+
If you need to access the [RunnableConfig](/docs/concepts/runnables/#runnableconfig) object from within a tool, you can do so by using the `RunnableConfig` annotation in the tool's function signature.
|
145 |
+
|
146 |
+
```python
|
147 |
+
from langchain_core.runnables import RunnableConfig
|
148 |
+
|
149 |
+
@tool
|
150 |
+
async def some_func(..., config: RunnableConfig) -> ...:
|
151 |
+
"""Tool that does something."""
|
152 |
+
# do something with config
|
153 |
+
...
|
154 |
+
|
155 |
+
await some_func.ainvoke(..., config={"configurable": {"value": "some_value"}})
|
156 |
+
```
|
157 |
+
|
158 |
+
The `config` will not be part of the tool's schema and will be injected at runtime with appropriate values.
|
159 |
+
|
160 |
+
:::note
|
161 |
+
You may need to access the `config` object in order to manually propagate it to sub-calls. This happens if you're working with Python 3.9 / 3.10 in an [async](/docs/concepts/async) environment and need to manually propagate the `config` object to sub-calls.
|
162 |
+
|
163 |
+
Please read [Propagation RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
|
164 |
+
:::
|
165 |
+
|
166 |
+
### InjectedState
|
167 |
+
|
168 |
+
Please see the [InjectedState](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.InjectedState) documentation for more details.
|
169 |
+
|
170 |
+
### InjectedStore
|
171 |
+
|
172 |
+
Please see the [InjectedStore](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.InjectedStore) documentation for more details.
|
173 |
+
|
174 |
+
## Best practices
|
175 |
+
|
176 |
+
When designing tools to be used by models, keep the following in mind:
|
177 |
+
|
178 |
+
- Tools that are well-named, correctly-documented and properly type-hinted are easier for models to use.
|
179 |
+
- Design simple and narrowly scoped tools, as they are easier for models to use correctly.
|
180 |
+
- Use chat models that support [tool-calling](/docs/concepts/tool_calling) APIs to take advantage of tools.
|
181 |
+
|
182 |
+
|
183 |
+
## Toolkits
|
184 |
+
<span data-heading-keywords="toolkit,toolkits"></span>
|
185 |
+
|
186 |
+
LangChain has a concept of **toolkits**. This is a very thin abstraction that groups tools that
|
187 |
+
are designed to be used together for specific tasks.
|
188 |
+
|
189 |
+
### Interface
|
190 |
+
|
191 |
+
All Toolkits expose a `get_tools` method which returns a list of tools. You can therefore do:
|
192 |
+
|
193 |
+
```python
|
194 |
+
# Initialize a toolkit
|
195 |
+
toolkit = ExampleToolkit(...)
|
196 |
+
|
197 |
+
# Get list of tools
|
198 |
+
tools = toolkit.get_tools()
|
199 |
+
```
|
200 |
+
|
201 |
+
## Related resources
|
202 |
+
|
203 |
+
See the following resources for more information:
|
204 |
+
|
205 |
+
- [API Reference for @tool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html)
|
206 |
+
- [How to create custom tools](/docs/how_to/custom_tools/)
|
207 |
+
- [How to pass run time values to tools](/docs/how_to/tool_runtime/)
|
208 |
+
- [All LangChain tool how-to guides](https://docs.langchain.com/docs/how_to/#tools)
|
209 |
+
- [Additional how-to guides that show usage with LangGraph](https://langchain-ai.github.io/langgraph/how-tos/tool-calling/)
|
210 |
+
- Tool integrations, see the [tool integration docs](https://docs.langchain.com/docs/integrations/tools/).
|
211 |
+
|
langchain_md_files/concepts/tracing.mdx
ADDED
@@ -0,0 +1,10 @@
1 |
+
# Tracing
|
2 |
+
|
3 |
+
<span data-heading-keywords="trace,tracing"></span>
|
4 |
+
|
5 |
+
A trace is essentially a series of steps that your application takes to go from input to output.
|
6 |
+
Traces contain individual steps called `runs`. These can be individual calls from a model, retriever,
|
7 |
+
tool, or sub-chains.
|
8 |
+
Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues.
|
9 |
+
|
10 |
+
For a deeper dive, check out [this LangSmith conceptual guide](https://docs.smith.langchain.com/concepts/tracing).
|
langchain_md_files/concepts/vectorstores.mdx
ADDED
@@ -0,0 +1,191 @@
1 |
+
# Vector stores
|
2 |
+
<span data-heading-keywords="vector,vectorstore,vectorstores,vector store,vector stores"></span>
|
3 |
+
|
4 |
+
:::info[Prerequisites]
|
5 |
+
|
6 |
+
* [Embeddings](/docs/concepts/embedding_models/)
|
7 |
+
* [Text splitters](/docs/concepts/text_splitters/)
|
8 |
+
|
9 |
+
:::
|
10 |
+
:::info[Note]
|
11 |
+
|
12 |
+
This conceptual overview focuses on text-based indexing and retrieval for simplicity.
|
13 |
+
However, embedding models can be [multi-modal](https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-multimodal-embeddings)
|
14 |
+
and vector stores can be used to store and retrieve a variety of data types beyond text.
|
15 |
+
:::
|
16 |
+
|
17 |
+
## Overview
|
18 |
+
|
19 |
+
Vector stores are specialized data stores that enable indexing and retrieving information based on vector representations.
|
20 |
+
|
21 |
+
These vectors, called [embeddings](/docs/concepts/embedding_models/), capture the semantic meaning of data that has been embedded.
|
22 |
+
|
23 |
+
Vector stores are frequently used to search over unstructured data, such as text, images, and audio, to retrieve relevant information based on semantic similarity rather than exact keyword matches.
|
24 |
+
|
25 |
+

|
26 |
+
|
27 |
+
## Integrations
|
28 |
+
|
29 |
+
LangChain has a large number of vectorstore integrations, allowing users to easily switch between different vectorstore implementations.
|
30 |
+
|
31 |
+
Please see the [full list of LangChain vectorstore integrations](/docs/integrations/vectorstores/).
|
32 |
+
|
33 |
+
## Interface
|
34 |
+
|
35 |
+
LangChain provides a standard interface for working with vector stores, allowing users to easily switch between different vectorstore implementations.
|
36 |
+
|
37 |
+
The interface consists of basic methods for writing, deleting and searching for documents in the vector store.
|
38 |
+
|
39 |
+
The key methods are:
|
40 |
+
|
41 |
+
- `add_documents`: Add a list of documents to the vector store.
|
42 |
+
- `delete`: Delete a list of documents from the vector store.
|
43 |
+
- `similarity_search`: Search for similar documents to a given query.
|
44 |
+
|
45 |
+
|
46 |
+
## Initialization
|
47 |
+
|
48 |
+
Most vector stores in LangChain accept an embedding model as an argument when initializing the vector store.
|
49 |
+
|
50 |
+
We will use LangChain's [InMemoryVectorStore](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.in_memory.InMemoryVectorStore.html) implementation to illustrate the API.
|
51 |
+
|
52 |
+
```python
|
53 |
+
from langchain_core.vectorstores import InMemoryVectorStore
|
54 |
+
# Initialize with an embedding model
|
55 |
+
vector_store = InMemoryVectorStore(embedding=SomeEmbeddingModel())
|
56 |
+
```
|
57 |
+
|
58 |
+
## Adding documents
|
59 |
+
|
60 |
+
To add documents, use the `add_documents` method.
|
61 |
+
|
62 |
+
This API works with a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects.
|
63 |
+
`Document` objects all have `page_content` and `metadata` attributes, making them a universal way to store unstructured text and associated metadata.
|
64 |
+
|
65 |
+
```python
|
66 |
+
from langchain_core.documents import Document
|
67 |
+
|
68 |
+
document_1 = Document(
|
69 |
+
page_content="I had chocalate chip pancakes and scrambled eggs for breakfast this morning.",
|
70 |
+
metadata={"source": "tweet"},
|
71 |
+
)
|
72 |
+
|
73 |
+
document_2 = Document(
|
74 |
+
page_content="The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.",
|
75 |
+
metadata={"source": "news"},
|
76 |
+
)
|
77 |
+
|
78 |
+
documents = [document_1, document_2]
|
79 |
+
|
80 |
+
vector_store.add_documents(documents=documents)
|
81 |
+
```
|
82 |
+
|
83 |
+
You should usually provide IDs for the documents you add to the vector store, so
|
84 |
+
that instead of adding the same document multiple times, you can update the existing document.
|
85 |
+
|
86 |
+
```python
|
87 |
+
vector_store.add_documents(documents=documents, ids=["doc1", "doc2"])
|
88 |
+
```
|
89 |
+
|
90 |
+
## Delete
|
91 |
+
|
92 |
+
To delete documents, use the `delete` method which takes a list of document IDs to delete.
|
93 |
+
|
94 |
+
```python
|
95 |
+
vector_store.delete(ids=["doc1"])
|
96 |
+
```
|
97 |
+
|
98 |
+
## Search
|
99 |
+
|
100 |
+
Vector stores embed and store the documents that are added.
|
101 |
+
If we pass in a query, the vectorstore will embed the query, perform a similarity search over the embedded documents, and return the most similar ones.
|
102 |
+
This captures two important concepts: first, there needs to be a way to measure the similarity between the query and *any* [embedded](/docs/concepts/embedding_models/) document.
|
103 |
+
Second, there needs to be an algorithm to efficiently perform this similarity search across *all* embedded documents.
|
104 |
+
|
105 |
+
### Similarity metrics
|
106 |
+
|
107 |
+
A critical advantage of embedding vectors is that they can be compared using a number of simple mathematical operations:
|
108 |
+
|
109 |
+
- **Cosine Similarity**: Measures the cosine of the angle between two vectors.
|
110 |
+
- **Euclidean Distance**: Measures the straight-line distance between two points.
|
111 |
+
- **Dot Product**: Measures the projection of one vector onto another.
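For intuition, a small sketch computing the three metrics on toy three-dimensional "embeddings" (real embeddings typically have hundreds or thousands of dimensions):

```python
import numpy as np

query_embedding = np.array([1.0, 0.5, 0.0])
doc_embedding = np.array([0.9, 0.4, 0.1])

cosine = np.dot(query_embedding, doc_embedding) / (
    np.linalg.norm(query_embedding) * np.linalg.norm(doc_embedding)
)
euclidean = np.linalg.norm(query_embedding - doc_embedding)
dot_product = np.dot(query_embedding, doc_embedding)
```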
|
112 |
+
|
113 |
+
The choice of similarity metric can sometimes be selected when initializing the vectorstore. Please refer
|
114 |
+
to the documentation of the specific vectorstore you are using to see what similarity metrics are supported.
|
115 |
+
|
116 |
+
:::info[Further reading]
|
117 |
+
|
118 |
+
* See [this documentation](https://developers.google.com/machine-learning/clustering/dnn-clustering/supervised-similarity) from Google on similarity metrics to consider with embeddings.
|
119 |
+
* See Pinecone's [blog post](https://www.pinecone.io/learn/vector-similarity/) on similarity metrics.
|
120 |
+
* See OpenAI's [FAQ](https://platform.openai.com/docs/guides/embeddings/faq) on what similarity metric to use with OpenAI embeddings.
|
121 |
+
|
122 |
+
:::
|
123 |
+
|
124 |
+
### Similarity search
|
125 |
+
|
126 |
+
Given a similarity metric to measure the distance between the embedded query and any embedded document, we need an algorithm to efficiently search over *all* the embedded documents to find the most similar ones.
|
127 |
+
There are various ways to do this. As an example, many vectorstores implement [HNSW (Hierarchical Navigable Small World)](https://www.pinecone.io/learn/series/faiss/hnsw/), a graph-based index structure that allows for efficient similarity search.
|
128 |
+
Regardless of the search algorithm used under the hood, the LangChain vectorstore interface has a `similarity_search` method for all integrations.
|
129 |
+
This will take the search query, create an embedding, find similar documents, and return them as a list of [Documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html).
|
130 |
+
|
131 |
+
```python
|
132 |
+
query = "my query"
|
133 |
+
docs = vectorstore.similarity_search(query)
|
134 |
+
```
|
135 |
+
|
136 |
+
Many vectorstores support search parameters to be passed with the `similarity_search` method. See the documentation for the specific vectorstore you are using to see what parameters are supported.
|
137 |
+
As an example, [Pinecone](https://python.langchain.com/api_reference/pinecone/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html#langchain_pinecone.vectorstores.PineconeVectorStore.similarity_search) accepts several parameters that illustrate important general concepts;
|
138 |
+
many vectorstores support [the `k` parameter](/docs/integrations/vectorstores/pinecone/#query-directly), which controls the number of Documents to return, and a `filter` parameter, which allows for filtering documents by metadata:
|
139 |
+
|
140 |
+
- `query (str) – Text to look up documents similar to.`
|
141 |
+
- `k (int) – Number of Documents to return. Defaults to 4.`
|
142 |
+
- `filter (dict | None) – Dictionary of argument(s) to filter on metadata`
|
143 |
+
|
144 |
+
:::info[Further reading]
|
145 |
+
|
146 |
+
* See the [how-to guide](/docs/how_to/vectorstores/) for more details on how to use the `similarity_search` method.
|
147 |
+
* See the [integrations page](/docs/integrations/vectorstores/) for more details on arguments that can be passed in to the `similarity_search` method for specific vectorstores.
|
148 |
+
|
149 |
+
:::
|
150 |
+
|
151 |
+
### Metadata filtering
|
152 |
+
|
153 |
+
While vector stores implement a search algorithm to efficiently search over *all* the embedded documents to find the most similar ones, many also support filtering on metadata.
|
154 |
+
Metadata filtering helps narrow down the search by applying specific conditions such as retrieving documents from a particular source or date range. These two concepts work well together:
|
155 |
+
|
156 |
+
1. **Semantic search**: Query the unstructured data directly, often via embedding or keyword similarity.
|
157 |
+
2. **Metadata search**: Apply structured query to the metadata, filtering specific documents.
|
158 |
+
|
159 |
+
Vector store support for metadata filtering is typically dependent on the underlying vector store implementation.
|
160 |
+
|
161 |
+
Here is example usage with [Pinecone](/docs/integrations/vectorstores/pinecone/#query-directly), showing that we filter for all documents that have the metadata key `source` with value `tweet`.
|
162 |
+
|
163 |
+
```python
|
164 |
+
vectorstore.similarity_search(
|
165 |
+
"LangChain provides abstractions to make working with LLMs easy",
|
166 |
+
k=2,
|
167 |
+
filter={"source": "tweet"},
|
168 |
+
)
|
169 |
+
```
|
170 |
+
|
171 |
+
:::info[Further reading]
|
172 |
+
|
173 |
+
* See Pinecone's [documentation](https://docs.pinecone.io/guides/data/filter-with-metadata) on filtering with metadata.
|
174 |
+
* See the [list of LangChain vectorstore integrations](/docs/integrations/retrievers/self_query/) that support metadata filtering.
|
175 |
+
|
176 |
+
:::
|
177 |
+
|
178 |
+
## Advanced search and retrieval techniques
|
179 |
+
|
180 |
+
While algorithms like HNSW provide the foundation for efficient similarity search in many cases, additional techniques can be employed to improve search quality and diversity.
|
181 |
+
For example, [maximal marginal relevance](https://python.langchain.com/v0.1/docs/modules/model_io/prompts/example_selectors/mmr/) is a re-ranking algorithm used to diversify search results, which is applied after the initial similarity search to ensure a more diverse set of results.
|
182 |
+
As a second example, some [vector stores](/docs/integrations/retrievers/pinecone_hybrid_search/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity search, which marries the benefits of both approaches.
|
183 |
+
At the moment, there is no unified way to perform hybrid search using LangChain vectorstores, but it is generally exposed as a keyword argument that is passed in with `similarity_search`.
|
184 |
+
See this [how-to guide on hybrid search](/docs/how_to/hybrid/) for more details.
|
185 |
+
|
186 |
+
| Name | When to use | Description |
|
187 |
+
|-------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------|
|
188 |
+
| [Hybrid search](/docs/integrations/retrievers/pinecone_hybrid_search/) | When combining keyword-based and semantic similarity. | Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. [Paper](https://arxiv.org/abs/2210.11934). |
|
189 |
+
| [Maximal Marginal Relevance (MMR)](https://python.langchain.com/api_reference/pinecone/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html#langchain_pinecone.vectorstores.PineconeVectorStore.max_marginal_relevance_search) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
|
190 |
+
|
191 |
+
|
langchain_md_files/concepts/why_langchain.mdx
ADDED
@@ -0,0 +1,109 @@
1 |
+
# Why LangChain?
|
2 |
+
|
3 |
+
The goal of `langchain` the Python package and LangChain the company is to make it as easy as possible for developers to build applications that reason.
|
4 |
+
While LangChain originally started as a single open source package, it has evolved into a company and a whole ecosystem.
|
5 |
+
This page will talk about the LangChain ecosystem as a whole.
|
6 |
+
Most of the components within the LangChain ecosystem can be used by themselves - so if you feel particularly drawn to certain components but not others, that is totally fine! Pick and choose whichever components you like best for your own use case!
|
7 |
+
|
8 |
+
## Features
|
9 |
+
|
10 |
+
There are several primary needs that LangChain aims to address:
|
11 |
+
|
12 |
+
1. **Standardized component interfaces:** The growing number of [models](/docs/integrations/chat/) and [related components](/docs/integrations/vectorstores/) for AI applications has resulted in a wide variety of different APIs that developers need to learn and use.
|
13 |
+
This diversity can make it challenging for developers to switch between providers or combine components when building applications.
|
14 |
+
LangChain exposes a standard interface for key components, making it easy to switch between providers.
|
15 |
+
|
16 |
+
2. **Orchestration:** As applications become more complex, combining multiple components and models, there's [a growing need to efficiently connect these elements into control flows](https://lilianweng.github.io/posts/2023-06-23-agent/) that can [accomplish diverse tasks](https://www.sequoiacap.com/article/generative-ais-act-o1/).
|
17 |
+
[Orchestration](https://en.wikipedia.org/wiki/Orchestration_(computing)) is crucial for building such applications.
|
18 |
+
|
19 |
+
3. **Observability and evaluation:** As applications become more complex, it becomes increasingly difficult to understand what is happening within them.
|
20 |
+
Furthermore, the pace of development can become rate-limited by the [paradox of choice](https://en.wikipedia.org/wiki/Paradox_of_choice).
|
21 |
+
For example, developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost.
|
22 |
+
[Observability](https://en.wikipedia.org/wiki/Observability) and evaluations can help developers monitor their applications and rapidly answer these types of questions with confidence.
|
23 |
+
|
24 |
+
|
25 |
+
## Standardized component interfaces
|
26 |
+
|
27 |
+
LangChain provides common interfaces for components that are central to many AI applications.
|
28 |
+
As an example, all [chat models](/docs/concepts/chat_models/) implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface.
|
29 |
+
This provides a standard way to interact with chat models, supporting important but often provider-specific features like [tool calling](/docs/concepts/tool_calling/) and [structured outputs](/docs/concepts/structured_outputs/).
|
30 |
+
|
31 |
+
|
32 |
+
### Example: chat models
|
33 |
+
|
34 |
+
Many [model providers](/docs/concepts/chat_models/) support [tool calling](/docs/concepts/tool_calling/), a critical feature for many applications (e.g., [agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/)), that allows a developer to request model responses that match a particular schema.
|
35 |
+
The APIs for each provider differ.
|
36 |
+
LangChain's [chat model](/docs/concepts/chat_models/) interface provides a common way to bind [tools](/docs/concepts/tools) to a model in order to support [tool calling](/docs/concepts/tool_calling/):
|
37 |
+
|
38 |
+
```python
|
39 |
+
# Tool creation
|
40 |
+
tools = [my_tool]
|
41 |
+
# Tool binding
|
42 |
+
model_with_tools = model.bind_tools(tools)
|
43 |
+
```
|
44 |
+
|
45 |
+
Similarly, getting models to produce [structured outputs](/docs/concepts/structured_outputs/) is an extremely common use case.
|
46 |
+
Providers support different approaches for this, including [JSON mode or tool calling](https://platform.openai.com/docs/guides/structured-outputs), with different APIs.
|
47 |
+
LangChain's [chat model](/docs/concepts/chat_models/) interface provides a common way to produce structured outputs using the `with_structured_output()` method:
|
48 |
+
|
49 |
+
```python
|
50 |
+
# Define schema
|
51 |
+
schema = ...
|
52 |
+
# Bind schema to model
|
53 |
+
model_with_structure = model.with_structured_output(schema)
|
54 |
+
```
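For instance, a hedged sketch of the pseudo-code above, assuming `model` is a chat model that supports structured outputs:

```python
from pydantic import BaseModel


class Joke(BaseModel):
    setup: str
    punchline: str


model_with_structure = model.with_structured_output(Joke)
joke = model_with_structure.invoke("Tell me a joke about cats")  # returns a Joke instance
```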
|
55 |
+
|
56 |
+
### Example: retrievers
|
57 |
+
|
58 |
+
In the context of [RAG](/docs/concepts/rag/) and LLM application components, LangChain's [retriever](/docs/concepts/retrievers/) interface provides a standard way to connect to many different types of data services or databases (e.g., [vector stores](/docs/concepts/vectorstores) or databases).
|
59 |
+
The underlying implementation of the retriever depends on the type of data store or database you are connecting to, but all retrievers implement the [runnable interface](/docs/concepts/runnables/), meaning they can be invoked in a common manner.
|
60 |
+
|
61 |
+
```python
|
62 |
+
documents = my_retriever.invoke("What is the meaning of life?")
|
63 |
+
```
|
64 |
+
|
65 |
+
## Orchestration
|
66 |
+
|
67 |
+
While standardization for individual components is useful, we've increasingly seen that developers want to *combine* components into more complex applications.
|
68 |
+
This motivates the need for [orchestration](https://en.wikipedia.org/wiki/Orchestration_(computing)).
|
69 |
+
There are several common characteristics of LLM applications that this orchestration layer should support:
|
70 |
+
|
71 |
+
* **Complex control flow:** The application requires complex patterns such as cycles (e.g., a loop that reiterates until a condition is met).
|
72 |
+
* **[Persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/):** The application needs to maintain [short-term and / or long-term memory](https://langchain-ai.github.io/langgraph/concepts/memory/).
|
73 |
+
* **[Human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/):** The application needs human interaction, e.g., pausing, reviewing, editing, approving certain steps.
|
74 |
+
|
75 |
+
The recommended way to orchestrate components for complex applications is [LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/).
|
76 |
+
LangGraph is a library that gives developers a high degree of control by expressing the flow of the application as a set of nodes and edges.
|
77 |
+
LangGraph comes with built-in support for [persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/), [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/), [memory](https://langchain-ai.github.io/langgraph/concepts/memory/), and other features.
|
78 |
+
It's particularly well suited for building [agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/) or [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent/) applications.
|
79 |
+
Importantly, individual LangChain components can be used as LangGraph nodes, but you can also use LangGraph **without** using LangChain components.
|
80 |
+
|
81 |
+
:::info[Further reading]
|
82 |
+
|
83 |
+
Have a look at our free course, [Introduction to LangGraph](https://academy.langchain.com/courses/intro-to-langgraph), to learn more about how to use LangGraph to build complex applications.
|
84 |
+
|
85 |
+
:::
|
86 |
+
|
87 |
+
## Observability and evaluation
|
88 |
+
|
89 |
+
The pace of AI application development is often rate-limited by high-quality evaluations because there is a paradox of choice.
|
90 |
+
Developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost.
|
91 |
+
High quality tracing and evaluations can help you rapidly answer these types of questions with confidence.
|
92 |
+
[LangSmith](https://docs.smith.langchain.com/) is our platform that supports observability and evaluation for AI applications.
|
93 |
+
See our conceptual guides on [evaluations](https://docs.smith.langchain.com/concepts/evaluation) and [tracing](https://docs.smith.langchain.com/concepts/tracing) for more details.

:::info[Further reading]

See our video playlist on [LangSmith tracing and evaluations](https://youtube.com/playlist?list=PLfaIDFEXuae0um8Fj0V4dHG37fGFU8Q5S&feature=shared) for more details.

:::

## Conclusion

LangChain offers standard interfaces for components that are central to many AI applications, which provides a few specific advantages:

- **Ease of swapping providers:** It allows you to swap out different component providers without having to change the underlying code (see the sketch below).
- **Advanced features:** It provides common methods for more advanced features, such as [streaming](/docs/concepts/streaming) and [tool calling](/docs/concepts/tool_calling/).
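
As a brief illustration of provider swapping, here is a minimal sketch that assumes the `langchain-openai` and `langchain-anthropic` integration packages are installed and the corresponding API keys are set; the model names are examples and may need updating.

```python
from langchain_openai import ChatOpenAI

# from langchain_anthropic import ChatAnthropic

# Pick a provider; the calling code below does not change.
model = ChatOpenAI(model="gpt-4o-mini")
# model = ChatAnthropic(model="claude-3-5-sonnet-latest")  # drop-in replacement

# The standard chat model interface is the same across providers.
response = model.invoke("What is the meaning of life?")
print(response.content)
```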

[LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/) makes it possible to orchestrate complex applications (e.g., [agents](/docs/concepts/agents/)) and provides features such as [persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/), [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/), and [memory](https://langchain-ai.github.io/langgraph/concepts/memory/).

[LangSmith](https://docs.smith.langchain.com/) makes it possible to iterate on your applications with confidence by providing LLM-specific observability and a framework for testing and evaluating your application.

langchain_md_files/contributing/how_to/code/guidelines.mdx
ADDED
@@ -0,0 +1,35 @@
# General guidelines

Here are some things to keep in mind for all types of contributions:

- Follow the ["fork and pull request"](https://docs.github.com/en/get-started/exploring-projects-on-github/contributing-to-a-project) workflow.
- Fill out the checked-in pull request template when opening pull requests. Note related issues and tag relevant maintainers.
- Ensure your PR passes formatting, linting, and testing checks before requesting a review.
- If you would like comments or feedback on your current progress, please open an issue or discussion and tag a maintainer.
- See the sections on [Testing](setup.mdx#testing) and [Formatting and Linting](setup.mdx#formatting-and-linting) for how to run these checks locally.
- Backwards compatibility is key. Your changes must not be breaking, except in the case of critical bug and security fixes.
- Look for duplicate PRs or issues that have already been opened before opening a new one.
- Keep scope as isolated as possible. As a general rule, your changes should not affect more than one package at a time.

## Bugfixes

We encourage and appreciate bugfixes. We ask that you:

- Explain the bug in enough detail for maintainers to be able to reproduce it.
- If an accompanying issue exists, link to it. Prefix the link with `Fixes` so that the issue will close automatically when the PR is merged.
- Avoid breaking changes if possible.
- Include unit tests that fail without the bugfix (see the sketch below).
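
For instance, a regression test for a bugfix is usually a small unit test that reproduces the reported behavior. The sketch below is purely hypothetical; the module, function, and file names are placeholders, not part of any real package.

```python
# tests/unit_tests/test_strip_prefix.py  (hypothetical path)
from mypackage.text_utils import strip_prefix  # hypothetical module under test


def test_strip_prefix_leaves_unprefixed_input_unchanged() -> None:
    # Reproduces the reported bug: before the fix this raised an IndexError,
    # so the test fails on the old code and passes once the bug is fixed.
    assert strip_prefix("no prefix here", prefix="foo: ") == "no prefix here"
```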

If you come across a bug and don't know how to fix it, we ask that you open an issue for it, describing in detail the environment in which you encountered the bug.

## New features

We aim to keep the bar high for new features. We generally don't accept new core abstractions, changes to infra, changes to dependencies,
or new agents/chains from outside contributors without an existing GitHub discussion or issue that demonstrates an acute need for them.

- New features must come with docs, unit tests, and (if appropriate) integration tests.
- New integrations must come with docs, unit tests, and (if appropriate) integration tests.
- See [this page](../integrations/index.mdx) for more details on contributing new integrations.
- New functionality should not inherit from or use deprecated methods or classes.
- We will reject features that are likely to lead to security vulnerabilities or reports.
- Do not add any hard dependencies. Integrations may add optional dependencies.

langchain_md_files/contributing/how_to/code/index.mdx
ADDED
@@ -0,0 +1,6 @@
# Contribute Code

If you would like to add a new feature or update an existing one, please read the resources below before getting started:

- [General guidelines](guidelines.mdx)
- [Setup](setup.mdx)