# Tool calling

* [Tools](/docs/concepts/tools)
* [Chat Models](/docs/concepts/chat_models)

## Overview

Many AI applications interact directly with humans. In these cases, it is appropriate for models to respond in natural language.
But what about cases where we want a model to also interact *directly* with systems, such as databases or an API?
These systems often have a particular input schema; for example, APIs frequently have a required payload structure.
This need motivates the concept of *tool calling*. You can use [tool calling](https://platform.openai.com/docs/guides/function-calling/example-use-cases) to request model responses that match a particular schema.
You will sometimes hear the term `function calling`. We use this term interchangeably with `tool calling`.
## Key concepts

**(1) Tool Creation:** Use the [@tool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html) decorator to create a [tool](/docs/concepts/tools). A tool is an association between a function and its schema.
**(2) Tool Binding:** The tool needs to be connected to a model that supports tool calling. This gives the model awareness of the tool and the input schema it requires.
**(3) Tool Calling:** When appropriate, the model can decide to call a tool and ensure its response conforms to the tool's input schema.
**(4) Tool Execution:** The tool can be executed using the arguments provided by the model.
## Recommended usage

This pseudocode illustrates the recommended workflow for tool calling.
Created tools are passed to the `.bind_tools()` method as a list.
The resulting model can be called as usual. If a tool call is made, the model's response will contain the tool call arguments.
The tool call arguments can be passed directly to the tool.
```python
# Tool creation
tools = [my_tool]
# Tool binding
model_with_tools = model.bind_tools(tools)
# Tool calling
response = model_with_tools.invoke(user_input)
# Tool execution (if a tool call was made)
tool_output = my_tool.invoke(response.tool_calls[0]["args"])
```
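To make this workflow runnable end-to-end without a real provider, here is a minimal sketch. `FakeModel`, `my_tool`, and the fixed tool-call arguments are illustrative assumptions for demonstration only, not part of LangChain's API.

```python
# A toy stand-in for a chat model, so the workflow above can be run
# without a provider. FakeModel is an illustrative assumption, not
# part of LangChain's API.
class FakeModel:
    def __init__(self, tools=None):
        self.tools = tools or []

    def bind_tools(self, tools):
        # Return a new "model" that is aware of the given tools.
        return FakeModel(tools)

    def invoke(self, user_input):
        # A real model decides *whether* to call a tool and with what
        # arguments; this fake always calls the first bound tool.
        return {"tool_calls": [{"name": self.tools[0].__name__,
                                "args": {"a": 2, "b": 3}}]}

def my_tool(a: int, b: int) -> int:
    """Multiply a and b."""
    return a * b

# Tool creation and binding
tools = [my_tool]
model_with_tools = FakeModel().bind_tools(tools)

# Tool calling: the "model" responds with a tool call
response = model_with_tools.invoke("What is 2 multiplied by 3?")

# Tool execution: pass the model-provided arguments to the tool
call = response["tool_calls"][0]
result = my_tool(**call["args"])  # 6
```

With a real chat model, `response` would be an `AIMessage` and the tool calls would live on its `tool_calls` attribute, as shown later in this page.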
## Tool creation

The recommended way to create a tool is using the `@tool` decorator.

```python
from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply a and b."""
    return a * b
```
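Under the hood, the decorator infers the tool's schema from the function's name, docstring, and type hints. As a rough illustration of that idea (not LangChain's actual implementation, which produces a richer Pydantic schema), a simple schema can be derived with the standard library:

```python
import inspect

def multiply(a: int, b: int) -> int:
    """Multiply a and b."""
    return a * b

def make_schema(fn):
    """Sketch: derive a name/description/args schema from a function.

    This mimics the *idea* behind the @tool decorator; it is not
    LangChain's implementation.
    """
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": fn.__doc__,
        "args": {name: p.annotation.__name__
                 for name, p in sig.parameters.items()},
    }

schema = make_schema(multiply)
# {'name': 'multiply', 'description': 'Multiply a and b.',
#  'args': {'a': 'int', 'b': 'int'}}
```

This association between a function and a machine-readable schema is what lets a model see which tools exist and how to call them.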
:::info[Further reading]

* See our conceptual guide on [tools](/docs/concepts/tools/) for more details.
* See our [model integrations](/docs/integrations/chat/) that support tool calling.
* See our [how-to guide](/docs/how_to/tool_calling/) on tool calling.

:::
## Tool binding

[Many model providers](https://platform.openai.com/docs/guides/function-calling) support tool calling.
See our [model integration page](/docs/integrations/chat/) for a list of providers that support tool calling.
The central concept to understand is that LangChain provides a standardized interface for connecting tools to models.
The `.bind_tools()` method can be used to specify which tools are available for a model to call.

```python
model_with_tools = model.bind_tools(tools_list)
```
As a specific example, let's take a function `multiply` and bind it as a tool to a model that supports tool calling.

```python
def multiply(a: int, b: int) -> int:
    """Multiply a and b.

    Args:
        a: first int
        b: second int
    """
    return a * b

llm_with_tools = tool_calling_model.bind_tools([multiply])
```
## Tool calling

A key principle of tool calling is that the model decides when to use a tool based on the input's relevance. The model doesn't always need to call a tool.
For example, given an unrelated input, the model would not call the tool:

```python
result = llm_with_tools.invoke("Hello world!")
```

The result would be an `AIMessage` containing the model's response in natural language (e.g., "Hello!").
However, if we pass an input *relevant to the tool*, the model should choose to call it:

```python
result = llm_with_tools.invoke("What is 2 multiplied by 3?")
```

As before, the output `result` will be an `AIMessage`.
But, if the tool was called, `result` will have a `tool_calls` attribute.
This attribute includes everything needed to execute the tool, including the tool name and input arguments:
```python
result.tool_calls
[{'name': 'multiply', 'args': {'a': 2, 'b': 3}, 'id': 'xxx', 'type': 'tool_call'}]
```

Note that `tool_calls` is a *list*: a single response may contain more than one tool call.
For more details on usage, see our [how-to guides](/docs/how_to/#tools)!
## Tool execution

[Tools](/docs/concepts/tools/) implement the [Runnable](/docs/concepts/runnables/) interface, which means that they can be invoked (e.g., `tool.invoke(args)`) directly.
[LangGraph](https://langchain-ai.github.io/langgraph/) offers pre-built components (e.g., [`ToolNode`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.ToolNode)) that will often invoke the tool on behalf of the user.
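The essence of tool execution is a dispatch: look the tool up by name and invoke it with the model-provided arguments. This plain-Python sketch uses an ordinary function and a hypothetical `tools_by_name` mapping for illustration; with real LangChain tools you would call `tool.invoke(...)` instead.

```python
def multiply(a: int, b: int) -> int:
    """Multiply a and b."""
    return a * b

# Map tool names to tools so a tool call can be dispatched by name.
tools_by_name = {"multiply": multiply}

# A tool call shaped like an entry of result.tool_calls (id shortened).
tool_call = {"name": "multiply", "args": {"a": 2, "b": 3},
             "id": "xxx", "type": "tool_call"}

# Execute the tool with the model-provided arguments.
tool = tools_by_name[tool_call["name"]]
output = tool(**tool_call["args"])  # 6
```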
:::info[Further reading]

* See our [how-to guide](/docs/how_to/tool_calling/) on tool calling.
* See the [LangGraph documentation on using ToolNode](https://langchain-ai.github.io/langgraph/how-tos/tool-calling/).

:::
## Best practices

When designing [tools](/docs/concepts/tools/) to be used by a model, keep the following in mind:

* Models with explicit [tool-calling APIs](/docs/concepts/tool_calling) will be better at tool calling than non-fine-tuned models.
* Models will perform better if the tools have well-chosen names and descriptions.
* Simple, narrowly scoped tools are easier for models to use than complex tools.
* Asking the model to select from a large list of tools poses challenges for the model.