In this tutorial we'll write a few programs to connect to different LLMs using LangChain. In the post What is LangChain - An Introduction, one of the points discussed was the standard interface LangChain provides to integrate with any LLM.
Options to connect to the LLM using LangChain
There are two options to connect to an LLM using LangChain-
- Using the init_chat_model function
- Using the LLM-specific model classes like ChatAnthropic, ChatOpenAI, ChatGoogleGenerativeAI, ChatOllama and so on.
LangChain init_chat_model function example
For the examples I am going to use OpenAI's GPT model, Google's Gemini model and qwen3-32b through the Groq inference provider. For another example, I'll use "llama3.1" through Ollama.
In the examples, the user's query is sent to the model, which responds with the answer to that query.
Packages needed are-
- python-dotenv
- langchain
- langchain-openai
- langchain-google-genai
- langchain-groq
- langchain-ollama
You can install them individually using
pip install PACKAGE_NAME or, if you are creating a Python project, you can create a requirements.txt file, put all the above mentioned external package dependencies in that file, and pass that file to the pip install command: pip install -r requirements.txt
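Putting the packages listed above into a requirements.txt file, its contents would be:

```
python-dotenv
langchain
langchain-openai
langchain-google-genai
langchain-groq
langchain-ollama
```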
Getting and setting the API key
For using OpenAI models, Gemini models and Groq you must first obtain an API key from the respective API provider. You can create a .env file in your Python project and store the generated API keys there.
- GEMINI_API_KEY = "YOUR_GOOGLE_GEMINI_KEY"
- GROQ_API_KEY = "YOUR_GROQ_API_KEY"
- OPENAI_API_KEY = "YOUR_OPENAI_API_KEY"
This .env file can then be loaded using the load_dotenv() function.
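If you prefer not to use a .env file, the same keys can be set directly on os.environ before the model is created; load_dotenv() effectively does this for you by reading the .env file. A minimal sketch, with placeholder values in place of real keys:

```python
import os

# Setting the API keys directly as environment variables
# (placeholder values shown; substitute your real keys).
os.environ["GEMINI_API_KEY"] = "YOUR_GOOGLE_GEMINI_KEY"
os.environ["GROQ_API_KEY"] = "YOUR_GROQ_API_KEY"
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

# The LangChain integrations pick these up from the environment
# when the model object is initialized.
print(os.environ.get("GROQ_API_KEY"))
```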
1. Connecting to Gemini
You can initialize the model by specifying the model name and, optionally, the model_provider in the init_chat_model() function. The temperature parameter controls the randomness, creativity, and determinism of the model's output.
- Low temperature (e.g., 0.0 to 0.3): makes the model deterministic and focused. You'll get a "to the point" answer.
- High temperature (e.g., 0.7 to 1.0+): makes the model more creative and diverse.
from langchain.chat_models import init_chat_model
from dotenv import load_dotenv
load_dotenv()
model = init_chat_model(
    model="google_genai:gemini-3.1-flash-lite-preview",
    temperature=0.3
)
response = model.invoke("Explain Agentic AI in 5 lines")
print(response)
Output
content=[{'type': 'text', 'text': 'Agentic AI refers to autonomous systems capable of setting their own goals, breaking them
into tasks, and executing them with minimal human intervention. Unlike traditional AI that simply responds to prompts, these
agents use reasoning and tools to navigate complex environments. They actively monitor progress, adapt their strategies in
real-time, and make decisions to achieve a desired outcome. Essentially, they shift the paradigm from "AI as a tool" to
"AI as a collaborative partner" that gets work done.', 'extras': {'signature':
'EjQKMgG+Pvb7ue1hvNYKCnERjPRv7v99o5JJsdZbRGGB3ce3fntMxKjz0D2dXa5GBv3l5myR'}}] additional_kwargs={} response_metadata=
{'finish_reason': 'STOP', 'model_name': 'gemini-3.1-flash-lite-preview', 'safety_ratings': [], 'model_provider': 'google_genai'}
id='lc_run--019d2edb-3426-7033-a08d-d6d29fc97de5-0' tool_calls=[] invalid_tool_calls=[] usage_metadata={'input_tokens': 9,
'output_tokens': 95, 'total_tokens': 104, 'input_token_details': {'cache_read': 0}}
As you can see, the response contains a lot of other information along with the actual content. You can extract the content part using response.content.
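Note that in the Gemini output above, content is a list of content blocks rather than a plain string. A small helper (hypothetical, written for this tutorial) can pull out just the text parts, while passing plain-string content through unchanged:

```python
# Hypothetical helper to extract plain text from a message's content,
# which may be a plain string or a list of content blocks
# (as in the Gemini output shown above).
def extract_text(content):
    # Most providers return content as a plain string
    if isinstance(content, str):
        return content
    # List-of-blocks content: join the 'text' fields of text blocks
    return "".join(
        block["text"]
        for block in content
        if isinstance(block, dict) and block.get("type") == "text"
    )

blocks = [{"type": "text", "text": "Agentic AI refers to autonomous systems."}]
print(extract_text(blocks))  # prints: Agentic AI refers to autonomous systems.
```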
2. Connecting to OpenAI
from langchain.chat_models import init_chat_model
from dotenv import load_dotenv
load_dotenv()
model = init_chat_model(
    model="gpt-5.2",
    temperature=0.3
)
response = model.invoke("Explain Agentic AI in 5 lines")
print(response.content)
3. Using qwen3-32b model through Groq
Since Groq is the inference provider here, you need to mention it explicitly using the model_provider parameter.
from langchain.chat_models import init_chat_model
from dotenv import load_dotenv
load_dotenv()
model = init_chat_model(
    model="qwen/qwen3-32b",
    model_provider="groq",
    temperature=0.3
)
response = model.invoke("Explain Agentic AI in 5 lines")
print(response.content)
Configuration with init_chat_model
The examples above used fixed model initialization, but you can also configure models at runtime using init_chat_model. That makes it easy to switch providers without changing code.
You need to set the following parameters for that-
- configurable_fields: Defines which fields can be changed at runtime (e.g., 'any' for all fields, or a list like ("model", "temperature")).
- config_prefix: If set, allows runtime configuration via config["configurable"]["{prefix}_{param}"].
In the following code, "gpt-5.2" is initially selected as the model, but at invocation time the configuration keys carrying the "my_config" prefix switch the model to "llama3.1".
from langchain.chat_models import init_chat_model
from dotenv import load_dotenv
load_dotenv()
configurable_model = init_chat_model(
    model="gpt-5.2",
    temperature=0.3,
    configurable_fields="any",  # Allows all fields to be configurable
    config_prefix="my_config"   # Prefix for runtime configuration keys
)
response = configurable_model.invoke(
    "What is the role of GPU in the rise of AI?",
    config={
        "configurable": {
            "my_config_temperature": 0.7,         # Override temperature for this invocation
            "my_config_model": "llama3.1",        # Override model for this invocation
            "my_config_model_provider": "ollama"  # Override model provider for this invocation
        }
    }
)
print(response.content)
You'll get the output through Ollama, not through GPT, because of the runtime configuration.
Using Chat model classes in LangChain
LangChain also provides chat model classes for integrating with various models, enabling developers to build conversational AI applications with support for OpenAI, Anthropic, Hugging Face, and other large language models.
These classes wrap various model providers, allowing developers to switch between them with minimal code changes.
Core classes for chat models are usually prefixed with Chat and imported from their integration packages, such as langchain_openai and langchain_anthropic. For example,
- ChatOpenAI: For OpenAI models.
- ChatAnthropic: For Anthropic models.
Examples using Chat model classes
- Using qwen3-32b model through ChatGroq
- Using ChatGoogleGenerativeAI to connect to Gemini
from langchain_groq import ChatGroq
from dotenv import load_dotenv
load_dotenv()
model = ChatGroq(
    model="qwen/qwen3-32b",
    temperature=0.3
)
response = model.invoke("What is the role of GPU in deep learning, explain in 5 lines?")
print(response.content)
from langchain_google_genai import ChatGoogleGenerativeAI
from dotenv import load_dotenv
load_dotenv()
model = ChatGoogleGenerativeAI(
    model="gemini-3.1-flash-lite-preview",
    temperature=0.3
)
response = model.invoke("What is the role of GPU in deep learning, explain in 5 lines?")
print(response.content)
That's all for this topic Connecting to Different LLMs Using LangChain. If you have any doubt or any suggestions to make please drop a comment. Thanks!