Tuesday, March 31, 2026

Messages in LangChain

In this tutorial we’ll see what messages are in LangChain and which message types are available.

What are Messages in LangChain?

When you interact with LLMs programmatically using LangChain, every input and output has an associated message type, structure, and metadata. Knowing these message types helps you structure your prompts and set the context for each message, which in turn gets you better responses from the LLM.

Attributes of a Message in LangChain

Each message is an object with the following attributes-

  • Role- Identifies the type of the message (System, Human, etc.).
  • Content- The actual content of the message (prompt from the user, response from the model).
  • Metadata- Optional data like token usage, message ID, etc.

Types of Messages in LangChain

Here is a list of the different message classes in LangChain.

  1. SystemMessage- Used to set how the model should behave and to set the context for the interaction.
  2. HumanMessage- Represents the user input.
  3. AIMessage- Represents the response generated by the model.
  4. ToolMessage- Represents the output of a tool call.
  5. ChatMessage- A message that can be assigned an arbitrary role other than the predefined ones.
  6. FunctionMessage- Message for passing the result of executing a tool back to a model. This is a legacy class, succeeded by ToolMessage.

SystemMessage Class

The SystemMessage class is used to provide high-level instructions that guide the model's behavior, tone, or style.

from langchain_core.messages import SystemMessage
SystemMessage(content="You are an experienced Python programmer")
  

HumanMessage Class

A HumanMessage represents user input and interactions. It can contain text, images, audio, files, and other multimodal content.

from langchain_core.messages import HumanMessage
HumanMessage(content="Write a binary search program in Python")
  

AIMessage Class

An AIMessage is the response from the model. It can include multimodal data, tool calls, and provider-specific metadata that you can access later.

Here is an example of how a message list may look with system, human, and AI messages.

from langchain_core.messages import AIMessage, SystemMessage, HumanMessage

# Conversation history
messages = [
    SystemMessage("You are a helpful assistant"),
    HumanMessage("Can you help me?"),
    # Create an AI message manually (for conversation history purposes)
    AIMessage("I'd be happy to help you with that question!"),  # Insert as if it came from the model
    HumanMessage("Great! What's agentic AI?")
]
  

ToolMessage Class

When a model decides to call a tool, the call appears in the tool_calls attribute of the returned AIMessage. The tool's output is then passed back to the model as a ToolMessage.

Suppose there is a tool that extracts some information from the message, and this tool is bound to the model.

from langchain_core.tools import tool

@tool
def extract_info(message: str) -> str:
    """Extract relevant information from the given message."""
    ...

Then the model knows it has to call this function to extract the information. The tool call is conveyed in the AIMessage, and the tool's output follows as a ToolMessage.

AIMessage(content='', additional_kwargs={}, response_metadata={'model': 'llama3.1', 'created_at': '2026-03-30T06:33:19.9566618Z', 
'done': True, 'done_reason': 'stop', 'total_duration': 75089402400, 'load_duration': 184685700, 'prompt_eval_count': 755, 
'prompt_eval_duration': 68079985500, 'eval_count': 42, 'eval_duration': 6618086400, 'logprobs': None, 'model_name': 'llama3.1', 
'model_provider': 'ollama'}, id='lc_run--019d3d71-3dfe-7b11-b5c8-8bedf1f1870e-0', 
tool_calls=[{'name': 'extract_info', 'args': {'message': 'Extract relevant information from this message, Name: Test, skills: Java, Python, Spring Boot'}, 'id': 'ccd14d41-41ca-47fc-b89e-68dfcdb0fae2', 'type': 'tool_call'}], invalid_tool_calls=[], usage_metadata={'input_tokens': 755, 'output_tokens': 42, 'total_tokens': 797}), ToolMessage(content='Extracted information successfully. Let me check if I have all the details.', name='extract_info', id='93988e12-e267-409e-8efe-7494193d6451', tool_call_id='ccd14d41-41ca-47fc-b89e-68dfcdb0fae2')

ChatMessage Class

The ChatMessage class in LangChain is a flexible message type that allows you to specify an arbitrary role for the speaker.

from langchain_core.messages import SystemMessage, HumanMessage, AIMessage, ChatMessage

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is agentic AI?"),
    AIMessage(content="Agentic AI is artificial intelligence that can autonomously perceive, reason, and act toward achieving goals without constant human intervention."),
    ChatMessage(role="developer", content="Ensure your next response is very technical."),
    HumanMessage(content="Explain the role of agentic AI."),
]
  

That's all for this topic Messages in LangChain. If you have any doubts or suggestions, please drop a comment. Thanks!


Related Topics

  1. What is LangChain - An Introduction
  2. First LangChain Program: Ask Me Anything
  3. Prompt Templates in LangChain With Examples
  4. LangChain PromptTemplate + Streamlit - Code Generator Example

