In this post we'll explore the tools in LangChain.
Large Language Models (LLMs) are powerful at generating text, reasoning, and summarizing, but they have inherent limitations:
- No real-time web search: They cannot fetch fresh information beyond their training data. (Many hosted LLM products now integrate web search, e.g. ChatGPT leverages Bing and Gemini uses Google Search, but the model itself has no such ability.)
- No API calling ability: LLMs operate in isolation; they cannot directly interact with external systems or services. That limitation becomes a roadblock if you want your LLM responses integrated with your own APIs or third-party platforms.
- No structured computation: LLMs aren't built for precise math or database-style operations. They can approximate answers, but for exact calculations, SQL queries, or deterministic logic they often fall short.
- No direct integration with workflows: They cannot trigger external actions such as sending emails or querying CRMs.
These gaps make LLMs less useful in production environments where up-to-date data, structured outputs, and external integrations are critical. This is exactly where LangChain's concept of tools comes in.
Tools in LangChain
Tools act as bridges between the LLM and the outside world, enabling it to fetch live information, perform structured computations, or call APIs seamlessly. In other words, tools transform an isolated LLM into a connected, production-ready system.
Behind the scenes, tools are just functions with well‑defined inputs and outputs. The chat model decides when to call them and what arguments to pass, based on the flow of the conversation. This decision process, known as tool calling, happens when the model detects that a user's request can't be answered with text alone and instead requires an external function, like fetching live data or performing a calculation.
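The decide-and-dispatch loop described above can be sketched framework-free. In this sketch the model's decision is stubbed out as a plain dict (the tool name, registry, and stub model are mine for illustration; LangChain's real flow uses chat-model messages, covered later):

```python
# Framework-free sketch of the tool-calling loop. The "model" here is a stub
# that returns a tool-call request as a plain dict; in LangChain the chat
# model makes this decision itself based on the conversation.

def get_weather(city: str) -> str:
    """Hypothetical tool: look up the weather for a city."""
    return f"Sunny in {city}"  # canned answer for the sketch

TOOLS = {"get_weather": get_weather}  # registry the dispatcher can call

def fake_model(user_message: str) -> dict:
    """Stub model: decides a tool is needed and names it with arguments."""
    return {"tool": "get_weather", "args": {"city": "Delhi"}}

def run_turn(user_message: str) -> str:
    decision = fake_model(user_message)
    tool_fn = TOOLS[decision["tool"]]      # look up the chosen tool
    result = tool_fn(**decision["args"])   # execute with model-chosen args
    return result                          # fed back to the model in real use

print(run_turn("What's the weather in Delhi?"))  # Sunny in Delhi
```

The key point the sketch captures: the model only *names* a tool and supplies arguments; your code performs the actual execution.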
Types of tools in LangChain
LangChain primarily categorizes tools into:
- Built-in tools: LangChain itself provides many pre-configured tools covering scenarios across search, API calling, and database tasks. See
https://docs.langchain.com/oss/python/integrations/tools for the available built-in tools. Some examples:
- Requests tools for API calls (RequestsGetTool, RequestsPostTool, etc.)
- WikipediaQueryRun to connect to Wikipedia
- DuckDuckGo Search, Google Serper, Google Search, etc. for online searches
- Python REPL Tool to execute arbitrary Python code within a shell environment, primarily used for complex calculations and data analysis
- Custom tools: You can create your own custom tool to encapsulate specific business logic, either with the @tool decorator or by subclassing BaseTool.
- Toolkits: You can bundle related tools to accomplish specific tasks, such as the Gmail toolkit, GitHub toolkit, or SQL Database toolkit. For example, the SQL Database toolkit contains the following tools:
- QuerySQLDatabaseTool: Takes a SQL query as input and returns the result from the database.
- InfoSQLDatabaseTool: Takes a comma-separated list of tables as input and returns the schema and sample rows for those tables.
- ListSQLDatabaseTool: Retrieves a comma-separated list of table names in the SQL database.
- QuerySQLCheckerTool: Double-checks whether a query is correct before executing it.
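To make the first three of those tools concrete, here is a framework-free sketch of the database operations they wrap, using Python's built-in sqlite3 module (the table and data are made up for illustration; QuerySQLCheckerTool has no stdlib analogue since it uses an LLM to review the query):

```python
import sqlite3

# In-memory database with one sample table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ram'), (2, 'Sita')")

# What ListSQLDatabaseTool does: list the table names
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['users']

# What InfoSQLDatabaseTool does: fetch the schema (the real tool also
# appends a few sample rows)
schema = conn.execute(
    "SELECT sql FROM sqlite_master WHERE name='users'").fetchone()[0]
print(schema)  # CREATE TABLE users (id INTEGER, name TEXT)

# What QuerySQLDatabaseTool does: run a query and return the rows
rows = conn.execute("SELECT name FROM users ORDER BY id").fetchall()
print(rows)  # [('Ram',), ('Sita',)]
```

The toolkit versions add the plumbing that lets a model call these operations by name with validated arguments.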
Built-in Tools Examples in LangChain
Let’s see some examples of integrations with built-in tools.
- Using DuckDuckGo search (requires the duckduckgo-search Python package)
from langchain_community.tools import DuckDuckGoSearchRun
search = DuckDuckGoSearchRun()
result = search.invoke("List 5 main sports events of 2026.")
print(result)
- Using RequestsGetTool to call an API GET method. The free JSONPlaceholder REST API is used here.
from langchain_community.tools import RequestsGetTool
from langchain_community.utilities.requests import JsonRequestsWrapper
# Initialize the wrapper so responses are parsed as JSON
requests_wrapper = JsonRequestsWrapper()
# The wrapper is mandatory; the tool's pydantic validator fails without it.
# allow_dangerous_requests=True is an explicit opt-in acknowledging the risk of
# letting the tool fetch arbitrary URLs; enable it only when you trust the inputs.
requests_tool = RequestsGetTool(requests_wrapper=requests_wrapper, allow_dangerous_requests=True)
result = requests_tool.invoke("https://jsonplaceholder.typicode.com/posts?_limit=2")
print(result)
Output
[{'userId': 1, 'id': 1, 'title': 'sunt aut facere repellat provident occaecati excepturi optio reprehenderit', 'body': 'quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto'}, {'userId': 1, 'id': 2, 'title': 'qui est esse', 'body': 'est rerum tempore vitae\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\nqui aperiam non debitis possimus qui neque nisi nulla'}]
- Using Wikipedia tool
You must install the wikipedia Python package to use this integration.
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
# Initialize the wrapper and tool
api_wrapper = WikipediaAPIWrapper(top_k_results=3, doc_content_chars_max=400)
wikipedia = WikipediaQueryRun(api_wrapper=api_wrapper)
result = wikipedia.invoke("Indian Economy")
print(result)
Custom Tools Example in LangChain
The simplest way to create a tool is with the @tool decorator. Let's create a simple function to multiply two numbers as a tool in LangChain.
from langchain.tools import tool
@tool
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b
result = multiply.invoke({"a": 3, "b": 4})
print(result)
Output
12.0
Points to note here-
- A function becomes a tool in LangChain by decorating it with @tool, which exposes it with a clear interface for the model to call.
- In LangChain, tools are Runnable objects by default. This means they can be executed with the .invoke() method, where you pass inputs as a dictionary that matches the tool’s defined schema.
- Type hints in the function are essential because they define the tool's input schema, ensuring the model knows exactly what arguments to expect. The docstring should be short yet descriptive, giving the model enough context to understand the tool’s purpose and when to use it.
- By default, the tool name comes from the function name. You can customize it by passing a name to the @tool decorator, for example @tool("multiply_tool").
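What the decorator derives from your type hints can be sketched with the standard library alone. This simplified mimic (the TYPE_NAMES mapping and build_schema function are mine, not LangChain's) shows how annotations turn into a JSON-schema-style argument description:

```python
import inspect

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

# Map Python annotations to JSON-schema-style type names, roughly as the
# @tool decorator does internally when building the input schema
TYPE_NAMES = {float: "number", int: "integer", str: "string", bool: "boolean"}

def build_schema(fn) -> dict:
    """Derive a per-argument type schema from a function's signature."""
    sig = inspect.signature(fn)
    return {
        name: {"type": TYPE_NAMES[param.annotation]}
        for name, param in sig.parameters.items()
    }

print(build_schema(multiply))  # {'a': {'type': 'number'}, 'b': {'type': 'number'}}
```

This is why untyped parameters are a problem: without annotations there is nothing to build the schema from, and the model cannot know what arguments to send.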
Metadata about tools
In LangChain, once you’ve defined a tool, you can inspect its arguments and description using built-in methods and attributes.
The most useful ones are:
- tool.args: Returns the input schema (usually a dictionary or Pydantic model) that defines the tool's expected arguments.
- tool.description: Returns the docstring you provided, which explains the tool's purpose.
- tool.name: Gives the tool's registered name (helpful when binding multiple tools).
- tool.input_schema: Shows the full schema object for structured validation.
- tool.__doc__: Standard Python docstring access. Note that @tool wraps your function in a tool object, so this returns the wrapper class's docstring rather than your function's (as the output below shows); use tool.description for the latter.
Example showing tool metadata:
from langchain.tools import tool
@tool
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b
result = multiply.invoke({"a": 3, "b": 4})
print(result)
print("tool name:", multiply.name)
print("tool description:", multiply.description)
print("tool args:", multiply.args)
print("tool input schema:", multiply.input_schema)
print("tool docstring:", multiply.__doc__)
Output:
12.0
tool name: multiply
tool description: Multiply two numbers.
tool args: {'a': {'title': 'A', 'type': 'number'}, 'b': {'title': 'B', 'type': 'number'}}
tool input schema: <class 'langchain_core.utils.pydantic.multiply'>
tool docstring: Tool that can operate on any number of inputs.
How LLM sees a tool
When you decorate a function with @tool, LangChain wraps it into a Runnable object and exposes metadata that the LLM can read.
- Tool name: The identifier it can call (e.g., "multiply").
- Description: Taken from the docstring, which tells the model what the tool does and when it should be used.
- Arguments schema: Derived from type hints (or a Pydantic model), so the model knows what inputs are required and their types.
The LLM doesn't actually read your Python code. Instead, it sees the tool's name, description, and input schema. When it recognizes that a user request needs a tool call, it responds with an AIMessage containing the chosen tool's name and the arguments to pass; your application then executes the tool and sends the result back to the model as a ToolMessage. This flow is covered in more detail in Tool calling.
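The round trip can be pictured with plain dicts standing in for LangChain's message objects (field names are simplified for the sketch; the real AIMessage and ToolMessage classes carry more structure):

```python
# Simplified message flow; real LangChain uses AIMessage / ToolMessage objects.

# 1. The model, given the tool's name and schema, answers with a tool call
#    instead of plain text:
ai_message = {
    "role": "ai",
    "tool_calls": [{"id": "call_1", "name": "multiply",
                    "args": {"a": 3, "b": 4}}],
}

# 2. Your code executes the named tool with the model-chosen arguments:
def multiply(a: float, b: float) -> float:
    return a * b

call = ai_message["tool_calls"][0]
result = multiply(**call["args"])

# 3. The result goes back to the model as a tool message tied to the call id,
#    so the model can compose its final answer:
tool_message = {"role": "tool", "tool_call_id": call["id"], "content": str(result)}
print(tool_message)  # {'role': 'tool', 'tool_call_id': 'call_1', 'content': '12'}
```

The call id is what lets the model match each result to the request it made, which matters once it issues several tool calls in one turn.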
Using Pydantic schema in tools
You can define a complex input schema with Pydantic models or JSON schemas, and specify it for the tool using the args_schema parameter. It tells the LLM exactly what arguments the tool expects, their types, and their descriptions. The benefits you get are:
- Ensures that arguments passed to the tool match the expected types and constraints.
- Uses Pydantic models to enforce strict validation (e.g., int, str, Literal).
Suppose you want the input to a tool to strictly follow a schema, for example a Person model with fields name, address, address_type (restricted to "home", "office", or "other"), and age. You can define this schema using a Pydantic model and pass it to the tool via the args_schema parameter, ensuring that all inputs are validated against the defined structure before the tool executes.
from langchain.tools import tool
from typing import Literal
from pydantic import BaseModel, Field

class Person(BaseModel):
    name: str = Field(description="The name of the person")
    address: str = Field(description="The address of the person")
    address_type: Literal["home", "office", "other"] = Field(description="The type of address: 'home', 'office', or 'other'")
    age: int = Field(description="The age of the person")

@tool(args_schema=Person)
def person_info_tool(name: str, address: str, address_type: str, age: int) -> str:
    """Tool to get information about a person."""
    person = Person(
        name=name,
        address=address,
        address_type=address_type,
        age=age
    )
    return person.model_dump_json()  # Return the person information as a JSON string
result = person_info_tool.invoke({
    "name": "Ram",
    "address": "123 VIP Street, New Delhi",
    "address_type": "home",  # must be one of 'home', 'office', or 'other', else validation fails
    "age": 30
})
print(result)
Output
{"name":"Ram","address":"123 VIP Street, New Delhi","address_type":"home","age":30}
Note that when you define a tool with args_schema, the recommended style is:
- Use a Pydantic model to describe the arguments (with Field for descriptions).
- Unpack the fields directly in the function signature (instead of passing the whole model object).
Tool using BaseTool
Since all tools in LangChain inherit from the abstract class BaseTool, you can also create a custom tool by subclassing BaseTool directly and implementing its required methods.
To create a custom tool by subclassing BaseTool, you must define the following attributes and methods:
- name: A unique string identifying the tool for the model.
- description: A natural language explanation telling the model when and how to use the tool.
- args_schema: (Optional) A Pydantic BaseModel that validates the tool's input and informs the agent about required arguments.
- _run: The synchronous logic for the tool's execution.
- _arun: (Optional) The asynchronous implementation of the tool. In recent langchain_core versions the default implementation falls back to running _run in a thread executor; in older versions an undefined _arun raised NotImplementedError.
Example of a custom tool:
Taking the same example of a multiply tool, you can also implement it by extending the BaseTool class directly, as shown below:
from pydantic import BaseModel, Field
from langchain_core.tools import BaseTool
from typing import Type

# Pydantic model to define the input schema for the tool
class MultiplyArgs(BaseModel):
    # ... means this field is required and has no default value.
    a: float = Field(..., description="The first number to multiply.")
    b: float = Field(..., description="The second number to multiply.")

class Multiply(BaseTool):
    name: str = "multiply"
    description: str = "Multiply two numbers together."
    args_schema: Type[BaseModel] = MultiplyArgs

    def _run(self, a: float, b: float) -> float:
        """Multiply two numbers."""
        return a * b

    async def _arun(self, a: float, b: float) -> float:
        """Asynchronous version of the multiply tool."""
        raise NotImplementedError("This tool does not support async yet.")
multiply_tool = Multiply()
result = multiply_tool.invoke({"a": 2.5, "b": 6.7})
print(result)
Output:
16.75
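The _run/_arun split follows a general Python pattern: a synchronous implementation plus an async counterpart that defers to it. A framework-free sketch of that pattern (the class and method names here are mine, not LangChain's; the thread-deferral mirrors in spirit the executor fallback mentioned above):

```python
import asyncio

class MultiplySketch:
    """Toy stand-in for a tool with sync and async execution paths."""

    def run(self, a: float, b: float) -> float:
        # Synchronous logic, analogous to BaseTool._run
        return a * b

    async def arun(self, a: float, b: float) -> float:
        # Async path: defer the sync logic to a worker thread so the
        # event loop is not blocked by slow synchronous work
        return await asyncio.to_thread(self.run, a, b)

sketch = MultiplySketch()
print(sketch.run(2.5, 6.7))                # 16.75
print(asyncio.run(sketch.arun(2.5, 6.7)))  # 16.75
```

For a pure computation like this the thread hop is overkill, but the same shape lets one implementation serve both sync callers and async agents.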
What to use: @tool or BaseTool?
Both @tool and BaseTool are valid ways to create tools in LangChain, but they serve slightly different purposes.
The @tool decorator is best used for quick, simple tool creation.
- Advantages:
- Minimal boilerplate, just write a function.
- Easy to add descriptions and schemas.
- Limitations:
- Offers less control over advanced behaviors (custom validation, async handling, logging).
A BaseTool subclass is best used for complex, highly customized tools.
- Advantages:
- Full control over execution (sync/async, error handling, logging).
- Useful when building reusable, production-grade tools.
- Limitations:
- More boilerplate code.
- Slightly harder to maintain for simple use cases.
Toolkit in LangChain
In LangChain, a toolkit is a collection of related tools designed to be used together for specific tasks, such as interacting with a database, a JSON file, or an external API. One of the advantages of toolkits is reusability: they can be reused across different agents or workflows.
The steps for creating a custom toolkit in LangChain are as given below:
- Define Individual Tools
Before creating a toolkit, you must define the individual tools it will contain.
- Create the Toolkit Class
- Inherit from BaseToolkit: Create a custom class that inherits from langchain_core.tools.BaseToolkit.
- Implement get_tools(): Every toolkit must implement a get_tools() method that returns a list of the tools it contains.
Creating custom toolkit – LangChain Example
from typing import List
from langchain.tools import tool, BaseTool
from langchain_core.tools import BaseToolkit

@tool
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

@tool
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

class MyToolKit(BaseToolkit):
    def get_tools(self) -> List[BaseTool]:
        return [multiply, add]
# create an instance of the toolkit and retrieve tools
toolkit = MyToolKit()
tools = toolkit.get_tools()
for t in tools:  # named t to avoid shadowing the imported @tool decorator
    print(f"Tool name: {t.name}")
    print(f"Tool description: {t.description}")
    print(f"Tool args: {t.args}")
    print(f"Tool input schema: {t.input_schema}")
    print(f"Tool docstring: {t.__doc__}")
    result = t.invoke({"a": 5, "b": 3})
    print(f"Result of invoking {t.name}: {result}\n")
Use Cases for LangChain Tools with Agents
The following are some use cases for LangChain tools with agents:
- Web Search Tool
- Fetch the latest information such as stock prices, breaking news, or real time events.
- Math Tool
- Perform precise calculations or numerical reasoning that go beyond the LLM’s built in capabilities.
- Database Query Tool
- Retrieve structured data like customer records, product inventories, or transaction histories from a connected database.
- Email Tool
- Draft, format, and send professional emails or notifications directly from within the agent workflow.
- GitHub Tool
- Interact with GitHub repositories- add or update files, fetch branch information, review pull requests, or automate project tasks.
That's all for this topic Tools in LangChain With Examples. If you have any doubt or any suggestions to make please drop a comment. Thanks!
Related Topics
- LangChain Conversational RAG with Multi-user Sessions
- Citation Aware RAG Application in LangChain
- Embeddings in LangChain With Examples
- Vector Stores in LangChain With Examples
- RunnableBranch in LangChain With Examples