# Why LangChain?
The goal of `langchain` the Python package and LangChain the company is to make it as easy as possible for developers to build applications that reason.
While LangChain started as a single open source package, it has since grown into a company and a whole ecosystem.
This page covers that ecosystem as a whole.
Most of the components within the LangChain ecosystem can be used on their own, so if you feel particularly drawn to certain components but not others, that is totally fine! Pick and choose whichever components you like best for your own use case!
## Features
There are several primary needs that LangChain aims to address:
1. **Standardized component interfaces:** The growing number of [models](/docs/integrations/chat/) and [related components](/docs/integrations/vectorstores/) for AI applications has resulted in a wide variety of APIs that developers need to learn and use.
This diversity can make it challenging for developers to switch between providers or combine components when building applications.
LangChain exposes a standard interface for key components, making it easy to switch between providers.
2. **Orchestration:** As applications become more complex, combining multiple components and models, there's [a growing need to efficiently connect these elements into control flows](https://lilianweng.github.io/posts/2023-06-23-agent/) that can [accomplish diverse tasks](https://www.sequoiacap.com/article/generative-ais-act-o1/).
[Orchestration](https://en.wikipedia.org/wiki/Orchestration_(computing)) is crucial for building such applications.
3. **Observability and evaluation:** As applications become more complex, it becomes increasingly difficult to understand what is happening within them.
Furthermore, the pace of development can become rate-limited by the [paradox of choice](https://en.wikipedia.org/wiki/Paradox_of_choice).
For example, developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost.
[Observability](https://en.wikipedia.org/wiki/Observability) and evaluations can help developers monitor their applications and rapidly answer these types of questions with confidence.
## Standardized component interfaces
LangChain provides common interfaces for components that are central to many AI applications.
As an example, all [chat models](/docs/concepts/chat_models/) implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface.
This provides a standard way to interact with chat models, supporting important but often provider-specific features like [tool calling](/docs/concepts/tool_calling/) and [structured outputs](/docs/concepts/structured_outputs/).
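As a minimal sketch of what a shared interface buys you (assuming the `langchain-openai` and `langchain-anthropic` integration packages are installed and the corresponding API keys are configured), swapping providers is a one-line change at the call site:
```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# Both classes implement BaseChatModel, so they are interchangeable here.
model = ChatOpenAI(model="gpt-4o-mini")
# model = ChatAnthropic(model="claude-3-5-sonnet-latest")

response = model.invoke("Summarize the benefit of standard interfaces in one sentence.")
print(response.content)
```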
### Example: chat models
Many [model providers](/docs/concepts/chat_models/) support [tool calling](/docs/concepts/tool_calling/), a critical feature for many applications (e.g., [agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/)) that allows a developer to request model responses matching a particular schema.
The APIs for each provider differ.
LangChain's [chat model](/docs/concepts/chat_models/) interface provides a common way to bind [tools](/docs/concepts/tools) to a model in order to support [tool calling](/docs/concepts/tool_calling/):
```python
from langchain_core.tools import tool

# Tool creation
@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Tool binding
model_with_tools = model.bind_tools([multiply])
```
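A hedged usage sketch, reusing the `multiply` tool above and assuming an OpenAI model initialized via `init_chat_model` (any provider that supports tool calling works the same way):
```python
from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-4o-mini", model_provider="openai")
model_with_tools = model.bind_tools([multiply])

# The model decides whether to call the tool; matching calls appear on tool_calls.
response = model_with_tools.invoke("What is 6 times 7?")
print(response.tool_calls)
# e.g. [{'name': 'multiply', 'args': {'a': 6, 'b': 7}, ...}]
```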
Similarly, getting models to produce [structured outputs](/docs/concepts/structured_outputs/) is an extremely common use case.
Providers support different approaches for this, including [JSON mode or tool calling](https://platform.openai.com/docs/guides/structured-outputs), with different APIs.
LangChain's [chat model](/docs/concepts/chat_models/) interface provides a common way to produce structured outputs using the `with_structured_output()` method:
```python
from pydantic import BaseModel

# Define schema
class Joke(BaseModel):
    setup: str
    punchline: str

# Bind schema to model
model_with_structure = model.with_structured_output(Joke)
```
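Invoking the wrapped model then returns an instance of the schema rather than raw text (a sketch, reusing the illustrative `Joke` schema above):
```python
joke = model_with_structure.invoke("Tell me a joke about cats")
print(joke.setup, "/", joke.punchline)
```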
### Example: retrievers
In the context of [RAG](/docs/concepts/rag/) and LLM application components, LangChain's [retriever](/docs/concepts/retrievers/) interface provides a standard way to connect to many different types of data services (e.g., [vector stores](/docs/concepts/vectorstores) or other databases).
The underlying implementation of the retriever depends on the type of data store or database you are connecting to, but all retrievers implement the [runnable interface](/docs/concepts/runnables/), meaning they can be invoked in a common manner.
```python
documents = my_retriever.invoke("What is the meaning of life?")
```
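As a minimal sketch of where such a retriever might come from, assuming the `langchain-openai` package for embeddings (any vector store and embedding model work):
```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

# Index a few texts; every vector store exposes as_retriever().
vector_store = InMemoryVectorStore.from_texts(
    ["Retrievers return documents.", "All retrievers implement the runnable interface."],
    embedding=OpenAIEmbeddings(),
)
my_retriever = vector_store.as_retriever()

for doc in my_retriever.invoke("What do retrievers return?"):
    print(doc.page_content)
```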
## Orchestration
While standardization for individual components is useful, we've increasingly seen that developers want to *combine* components into more complex applications.
This motivates the need for [orchestration](https://en.wikipedia.org/wiki/Orchestration_(computing)).
There are several common characteristics of LLM applications that this orchestration layer should support:
* **Complex control flow:** The application requires complex patterns such as cycles (e.g., a loop that repeats until a condition is met).
* **[Persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/):** The application needs to maintain [short-term and / or long-term memory](https://langchain-ai.github.io/langgraph/concepts/memory/).
* **[Human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/):** The application needs human interaction, e.g., pausing to review, edit, or approve certain steps.
The recommended way to orchestrate components for complex applications is [LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/).
LangGraph is a library that gives developers a high degree of control by expressing the flow of the application as a set of nodes and edges.
LangGraph comes with built-in support for [persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/), [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/), [memory](https://langchain-ai.github.io/langgraph/concepts/memory/), and other features.
It's particularly well suited for building [agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/) or [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent/) applications.
Importantly, individual LangChain components can be used as LangGraph nodes, but you can also use LangGraph **without** using LangChain components.
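As a minimal sketch of the nodes-and-edges model (the node below is a stub rather than a real model call):
```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    topic: str
    draft: str

def generate(state: State) -> dict:
    # In a real application this node would call a chat model.
    return {"draft": f"A draft about {state['topic']}"}

builder = StateGraph(State)
builder.add_node("generate", generate)
builder.add_edge(START, "generate")
builder.add_edge("generate", END)
graph = builder.compile()

print(graph.invoke({"topic": "standard interfaces", "draft": ""}))
```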
:::info[Further reading]
Have a look at our free course, [Introduction to LangGraph](https://academy.langchain.com/courses/intro-to-langgraph), to learn more about how to use LangGraph to build complex applications.
:::
## Observability and evaluation
The pace of AI application development is often rate-limited by the paradox of choice: developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost.
High-quality tracing and evaluations can help you rapidly answer these types of questions with confidence.
[LangSmith](https://docs.smith.langchain.com/) is our platform that supports observability and evaluation for AI applications.
See our conceptual guides on [evaluations](https://docs.smith.langchain.com/concepts/evaluation) and [tracing](https://docs.smith.langchain.com/concepts/tracing) for more details.
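Getting started with tracing is mostly configuration; a sketch, assuming a LangSmith account and API key (the environment variables below enable tracing for subsequent LangChain calls):
```python
import os

# Set before creating models; later runs are then traced to LangSmith.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "my-first-project"  # optional project name
```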
:::info[Further reading]
See our video playlist on [LangSmith tracing and evaluations](https://youtube.com/playlist?list=PLfaIDFEXuae0um8Fj0V4dHG37fGFU8Q5S&feature=shared) for more details.
:::
## Conclusion
LangChain offers standard interfaces for components that are central to many AI applications, which provides a few specific advantages:
- **Ease of swapping providers:** You can swap out different component providers without having to change the underlying code.
- **Advanced features:** It provides common methods for more advanced features, such as [streaming](/docs/concepts/streaming) and [tool calling](/docs/concepts/tool_calling/).
[LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/) makes it possible to orchestrate complex applications (e.g., [agents](/docs/concepts/agents/)) and provides features like [persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/), [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/), and [memory](https://langchain-ai.github.io/langgraph/concepts/memory/).
[LangSmith](https://docs.smith.langchain.com/) makes it possible to iterate on your applications with confidence by providing LLM-specific observability and a framework for testing and evaluating your application.