LangChain is an open-source framework designed to simplify the creation of LLM-powered applications. Through its tools and APIs, it lets you refine and customize information, making LLM-generated responses more precise and contextually meaningful. For example, you can use LangChain to link LLMs to external data sources and computation, enhancing a model's capabilities. By leveraging these capabilities, developers can build powerful AI applications such as smart chatbots, advanced Q&A systems, and summarization services.
LangChain Installation
The LangChain framework is available as libraries in both Python and JavaScript.
- To install the LangChain package in Python, use the following command:
pip install -U langchain
The prerequisite is Python 3.10+ for LangChain version 1.2.13.
- To install the LangChain packages in JavaScript, use the following command:
npm install langchain @langchain/core
The prerequisite is Node.js 20+ for LangChain version 1.2.13.
Core components of LangChain
Models
LangChain’s standard model interfaces give you access to nearly all LLM providers through a single, consistent API. These include proprietary providers such as OpenAI, Anthropic, and Google; open-source models (such as Meta AI’s Llama, DeepSeek’s DeepSeek-LLM, and Mistral) through Hugging Face; and locally downloaded LLMs through Ollama.
Note that many LLM providers require an API key, and you need to create an account in order to receive one.
The simplest way to connect to any LLM is to use the init_chat_model function in LangChain; alternatively, you can use provider-specific model classes such as ChatAnthropic, ChatOpenAI, ChatGoogleGenerativeAI, ChatOllama, and so on.
The only thing you need is the integration package for the chosen model provider, which generally follows the format langchain-LLM_PROVIDER_NAME. So, if you are using OpenAI, you need to install the langchain-openai package.
Prompt Template
LangChain provides a PromptTemplate class to create a prompt template, a dynamic prompt with placeholders that can be filled with actual values later. There is also a ChatPromptTemplate class, which acts as a prompt template for chat models.
Tools
LLMs have some limitations. First, these models are trained up to some cutoff date and have data only up to that time, although many LLMs now have web browsing capability to work around the training data cutoff. Another problem is that they lack domain-specific or company-specific data.
LangChain tools can give that functionality to agents by letting them fetch real-time data, execute code, query external databases, and take actions in the world. Under the hood, tools are callable functions with well-defined inputs and outputs that get passed to a chat model. The model decides when to invoke a tool based on the conversation context, and what input arguments to provide.
Agents
LangChain agents combine LLMs with tools to create systems that can complete complex tasks step by step. Unlike a basic application where you send a prompt and get a response from an LLM, LangChain agents can think, plan, and adapt.
They integrate language models with tools to reason about tasks and choose the right actions.
This allows them to iteratively work toward solutions with greater intelligence and flexibility.
Chains
As the name LangChain itself suggests, chains are the main concept of LangChain. They allow developers to link multiple components together into a single workflow. Each link in the chain can handle a specific task, from prompt formatting to retrieval and reasoning. This modular design makes it easy to build complex AI applications step by step.
With the LangChain Expression Language (LCEL), you can define these chains declaratively, using the pipe (|) symbol to connect prompts, models, retrievers, and output parsers.
Memory
For many AI applications you may need to retain information about previous interactions. For AI agents, memory is crucial because it lets them remember previous interactions, learn from feedback, and adapt to user preferences. LangChain provides both short-term and long-term memory.
- Short-term memory: lets your application remember previous interactions within a single thread or conversation.
- Long-term memory: lets your agent store and recall information across different conversations and sessions. It persists across threads and can be recalled at any time.
How does LangChain work
Now that you have some idea about the components of LangChain, let us try to understand how LangChain can actually chain different components together to create a workflow for LLM-powered applications.
For example, a typical Retrieval-Augmented Generation (RAG) application may have a workflow as given below-
- Receive the user’s query.
- Reformulate the query by sending it to the LLM which rephrases the user provided query into a concise search query.
- Retrieve the data relevant to search query, which could involve connecting to databases, APIs, or other repositories (e.g. Confluence). LangChain provides various document loaders like PyPDFLoader, TextLoader, CSVLoader, ConfluenceLoader for integrating data from numerous sources.
- Pass the retrieved documents to the LLM to summarize the information.
- Pass the retrieved information, along with the original query, to an LLM to get the final answer.
That's all for this topic What is LangChain - An Introduction. If you have any doubt or any suggestions to make please drop a comment. Thanks!