
Building AI Agents with LangChain: Practical Guide

March 6, 2026 · 14 min read

AI agents represent one of the most exciting developments in today's software world. The ability of an LLM (Large Language Model) to not only generate text but also make decisions, use tools, and execute autonomous tasks is fundamentally changing the application development paradigm. LangChain stands out as the most popular and comprehensive framework at the center of this transformation. In this guide, we will explore every aspect of building AI agents with LangChain in depth.

1. What is LangChain?

LangChain is an open-source Python and JavaScript framework designed for building applications powered by large language models (LLMs). Created by Harrison Chase in 2022, LangChain has quickly become an indispensable part of the AI development ecosystem.

The core philosophy of LangChain is that instead of using LLMs in isolation, connecting them to external data sources, APIs, and tools creates far more powerful and functional applications. The framework features a modular architecture and provides developers with composable components that enable building complex workflows with ease.

Info

The LangChain ecosystem now consists of multiple packages: langchain-core (base abstractions), langchain (chain and agent logic), langchain-community (third-party integrations), and langgraph (graph-based orchestration). This modular structure lets you install only the components you need.

Some key reasons behind LangChain's popularity include:

  • Model Agnostic: Works with dozens of LLM providers including OpenAI, Anthropic, Google, and Hugging Face
  • Rich Integrations: Offers over 700 third-party integrations
  • Active Community: 80,000+ GitHub stars and thousands of contributors
  • Comprehensive Documentation: Supported by detailed guides and examples
  • Rapid Prototyping: Enables you to quickly develop complex AI applications

2. Core Components

LangChain's architecture is built on modular, composable components. Each component has a specific responsibility and can be used independently or combined with other components.

| Component  | Description                       | Use Case                    |
|------------|-----------------------------------|-----------------------------|
| Models     | LLM and chat model abstractions   | Text generation, chat       |
| Prompts    | Prompt templates and management   | Dynamic prompt creation     |
| Chains     | Component chaining                | Multi-step workflows        |
| Agents     | Decision-making autonomous agents | Dynamic tool selection      |
| Tools      | Tools used by agents              | API calls, calculations     |
| Memory     | Conversation history management   | Chatbots, context retention |
| Retrievers | Document search and retrieval     | RAG applications            |

Installation is straightforward. You can install the base package and the integrations you need via pip:

pip install langchain langchain-openai langchain-community
pip install langgraph langsmith

3. Chains: Sequential Operations

Chains are the fundamental concept that gives LangChain its name. A chain allows you to combine multiple components sequentially or in parallel to create complex workflows. At its simplest, connecting a prompt template to an LLM creates a chain.

Simple Chain Example

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Define model and prompt
model = ChatOpenAI(model="gpt-4o", temperature=0.7)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an experienced software consultant."),
    ("human", "Write a brief summary about {topic}.")
])

# Create chain with LCEL
chain = prompt | model | StrOutputParser()

# Run the chain
result = chain.invoke({"topic": "microservices architecture"})
print(result)

Sequential Chain

Sequential chains are used to model workflows where one step's output becomes the next step's input. For example, you might want to first research a topic and then summarize that research:

from langchain_core.runnables import RunnablePassthrough

# Step 1: Research
research_prompt = ChatPromptTemplate.from_template(
    "Provide a detailed analysis of {topic}."
)

# Step 2: Summary
summary_prompt = ChatPromptTemplate.from_template(
    "Summarize the following analysis in 3 bullet points:\n\n{analysis}"
)

# Chain them together
chain = (
    research_prompt 
    | model 
    | StrOutputParser() 
    | (lambda x: {"analysis": x})
    | summary_prompt 
    | model 
    | StrOutputParser()
)

result = chain.invoke({"topic": "AI ethics"})

4. Agents and Tools

Agents are one of LangChain's most powerful features. While a chain applies predefined steps in sequence, an agent uses the LLM as a reasoning engine to dynamically decide which steps to take and which tools to use. This gives AI applications true autonomy.

Defining Tools

Tools are functions that agents use to interact with the external world. LangChain makes creating tools extremely easy:

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_weather(city: str) -> str:
    """Gets the current weather information for the specified city."""
    weather_data = {
        "New York": "72°F, Partly Cloudy",
        "London": "59°F, Rainy",
        "Tokyo": "68°F, Clear"
    }
    return weather_data.get(city, f"No data found for {city}.")

@tool
def calculator(expression: str) -> str:
    """Calculates mathematical expressions. E.g., '2 + 3 * 4'"""
    try:
        # NOTE: eval() is unsafe on untrusted input; acceptable for a demo,
        # but use a restricted evaluator in production (see the warning below).
        result = eval(expression)
        return f"Result: {result}"
    except Exception as e:
        return f"Calculation error: {str(e)}"

@tool
def web_search(query: str) -> str:
    """Searches the web and returns results."""
    # In production, use SerpAPI, Tavily, etc.
    return f"Search results found for '{query}'."

Creating an Agent

from langgraph.prebuilt import create_react_agent

# List tools
tools = [get_weather, calculator, web_search]

# Model
model = ChatOpenAI(model="gpt-4o", temperature=0)

# Create ReAct Agent
agent = create_react_agent(model, tools)

# Run the agent
result = agent.invoke({
    "messages": [
        ("human", "What's the weather in New York and convert it to Celsius?")
    ]
})

for message in result["messages"]:
    print(message.content)

Warning

While agents are powerful, directly using functions like eval() poses a security risk. In production environments, always use input validation and sandboxed environments in your tools. Additionally, integrating LangSmith to monitor agent token consumption is strongly recommended.
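As a concrete alternative to `eval()`, here is a minimal sketch of a restricted arithmetic evaluator built on Python's `ast` module. It walks the parsed expression tree and only permits a whitelist of operators, so arbitrary code such as function calls or imports is rejected:

```python
import ast
import operator

# Whitelisted operators; anything else raises ValueError
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate a basic arithmetic expression without eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))

print(safe_eval("2 + 3 * 4"))  # 14
```

Swapping this in for `eval()` inside the `calculator` tool removes the code-execution risk while keeping the same behavior for plain arithmetic.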

5. Memory: Conversation History

LLMs are inherently stateless; each request is processed independently. However, when building a chatbot or assistant, remembering previous conversations is critically important. LangChain's memory system solves this problem through various strategies.

Memory Types

| Memory Type                    | How It Works                       | Use Case               |
|--------------------------------|------------------------------------|------------------------|
| ConversationBufferMemory       | Stores all messages                | Short conversations    |
| ConversationBufferWindowMemory | Stores the last N messages         | Token limit management |
| ConversationSummaryMemory      | Keeps a summary of the conversation | Long conversations    |
| VectorStoreRetrieverMemory     | Embedding-based search             | Semantic memory        |

Modern Memory Management with LangGraph

In the current LangChain ecosystem, LangGraph's checkpointer mechanism is the preferred approach for memory management. This approach automatically saves and restores each conversation state:

from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver

# Memory manager
checkpointer = MemorySaver()

# Create agent with memory
agent = create_react_agent(
    model, 
    tools, 
    checkpointer=checkpointer
)

# First message
config = {"configurable": {"thread_id": "conversation-1"}}
agent.invoke(
    {"messages": [("human", "My name is Alice.")]},
    config=config
)

# Second message - remembers previous context
agent.invoke(
    {"messages": [("human", "Do you remember my name?")]},
    config=config
)
# Output: "Yes, your name is Alice!"

6. LCEL (LangChain Expression Language)

LCEL is LangChain's declarative chaining language. It uses the pipe (|) operator to connect components together. Every chain created with LCEL automatically has the following capabilities:

  • Streaming: Real-time token-by-token output streaming
  • Async Support: Asynchronous methods like ainvoke, astream
  • Batch Processing: Process multiple inputs in parallel
  • Automatic Retry: Automatic retry on failures
  • Fallback: Define alternative models/chains

Parallel Processing with LCEL

from langchain_core.runnables import RunnableParallel

# Parallel chains
analysis_chain = RunnableParallel(
    summary=ChatPromptTemplate.from_template(
        "Summarize the text: {text}"
    ) | model | StrOutputParser(),
    
    sentiment=ChatPromptTemplate.from_template(
        "Perform sentiment analysis on: {text}"
    ) | model | StrOutputParser(),
    
    keywords=ChatPromptTemplate.from_template(
        "Extract keywords from: {text}"
    ) | model | StrOutputParser()
)

# Three analyses in a single call
results = analysis_chain.invoke({"text": "A long article text..."})
print(results["summary"])
print(results["sentiment"])
print(results["keywords"])

Real-Time Output with Streaming

chain = prompt | model | StrOutputParser()

# Token-by-token streaming
async for chunk in chain.astream({"topic": "Quantum computing"}):
    print(chunk, end="", flush=True)

7. RAG Integration

RAG (Retrieval-Augmented Generation) is one of the most important architectural patterns that enables LLMs to access information beyond their training data. LangChain provides comprehensive tools for building RAG pipelines.

RAG Pipeline Steps

A typical RAG pipeline consists of four fundamental steps:

  1. Document Loading: Loading data from sources such as PDFs, web pages, and databases
  2. Splitting: Dividing documents into small, meaningful chunks
  3. Embedding: Converting text chunks into vector representations
  4. Retrieval + Generation: Finding relevant chunks and presenting them as context to the LLM
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# 1. Load documents
loader = PyPDFLoader("company_policies.pdf")
documents = loader.load()

# 2. Split into chunks
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
    separators=["\n\n", "\n", ". ", " "]
)
chunks = text_splitter.split_documents(documents)

# 3. Store in vector database
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = FAISS.from_documents(chunks, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# 4. RAG Chain
rag_prompt = ChatPromptTemplate.from_template("""
Answer the question using the following context information.
Do not fabricate information not found in the context.

Context: {context}

Question: {question}

Answer:""")

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | rag_prompt
    | model
    | StrOutputParser()
)

# Ask a question
answer = rag_chain.invoke("What is the company's remote work policy?")
print(answer)

Tip

In RAG applications, adjust chunk_size and chunk_overlap values according to your content type. For technical documentation, 1000-1500 characters usually works well, while for chat logs, 500-800 characters gives good results. RecursiveCharacterTextSplitter is the best choice for most scenarios.

8. Debugging with LangSmith

LangSmith is an observability platform developed by the LangChain team. It enables you to trace every step of your LLM applications, debug issues, and evaluate performance. It is essential for managing AI agents running in production environments.

LangSmith Setup

# Set environment variables
import os
os.environ["LANGSMITH_API_KEY"] = "ls__..."
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "my-ai-agent-project"

# All LangChain calls are now automatically traced!

What you can do with LangSmith:

  • Trace Viewing: Visually inspect all steps of every chain/agent call
  • Token Usage: Monitor token consumption and cost at each step
  • Latency Analysis: Identify which steps take the longest
  • Error Tracking: Analyze failed calls and error messages
  • Evaluation: Measure output quality with automatic and human evaluation
  • Dataset Management: Create test datasets and run regression tests

Automated Evaluation

from langsmith import Client
from langsmith.evaluation import evaluate

client = Client()

# Create evaluation dataset
dataset = client.create_dataset("rag-test-set")
client.create_examples(
    inputs=[{"question": "What is the remote work policy?"}],
    outputs=[{"answer": "3 days remote work per week..."}],
    dataset_id=dataset.id
)

# Run automated evaluation
results = evaluate(
    rag_chain.invoke,
    data="rag-test-set",
    evaluators=["correctness", "relevance"]
)

9. Multi-Agent Systems with LangGraph

LangGraph is the newest and most powerful component of the LangChain ecosystem. Using a state machine and graph-based approach, it enables you to build complex, multi-step, and multi-agent workflows. With LangGraph, you can model cyclical graphs, conditional branching, and workflows requiring human intervention.

Multi-Agent Architecture

In multi-agent systems, each agent has a specific area of expertise and collaborates with others to solve complex tasks. Common multi-agent patterns include:

  • Supervisor: A central agent distributes tasks to sub-agents
  • Hierarchical: Multi-layered supervisor structure
  • Collaborative: Agents cooperate at equal levels
  • Competitive: Agents solve the same task independently, best result is selected
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import create_react_agent
from langchain_core.messages import HumanMessage

# Create specialist agents
researcher = create_react_agent(
    model, 
    tools=[web_search],
    prompt="You are a research expert. Search the web for information."
)

writer = create_react_agent(
    model, 
    tools=[],
    prompt="You are a content writer. Transform research into articles."
)

# Supervisor agent
def supervisor_node(state: MessagesState):
    """Analyzes the task and decides which agent to route to."""
    response = model.invoke([
        ("system", """Analyze the incoming task.
        If research is needed, route to 'researcher'.
        If writing is needed, route to 'writer'.
        If done, say 'FINISH'."""),
        *state["messages"]
    ])
    return {"messages": [response]}

# Build graph
builder = StateGraph(MessagesState)
builder.add_node("supervisor", supervisor_node)
builder.add_node("researcher", researcher)
builder.add_node("writer", writer)

# Define edges
builder.add_edge(START, "supervisor")

def route(state):
    last_msg = state["messages"][-1].content.lower()
    if "researcher" in last_msg:
        return "researcher"
    elif "writer" in last_msg:
        return "writer"
    return END

builder.add_conditional_edges("supervisor", route)
builder.add_edge("researcher", "supervisor")
builder.add_edge("writer", "supervisor")

# Compile and run
graph = builder.compile()
result = graph.invoke({
    "messages": [HumanMessage("Write a blog post about AI trends")]
})

Human-in-the-Loop

LangGraph also supports workflows that require human approval for critical decisions. This feature is especially critical for scenarios such as financial transactions or sensitive data operations:

from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver

# interrupt_before pauses the graph before a named node executes;
# in create_react_agent, all tool calls run in the "tools" node
agent = create_react_agent(
    model,
    tools=[get_weather, calculator],
    checkpointer=MemorySaver(),
    interrupt_before=["tools"]  # Pause before any tool is called
)

# The agent runs until it decides to call a tool, then pauses;
# after the user approves, re-invoking the graph resumes execution


10. Practical Project Examples

Let's explore practical project examples that can be developed with LangChain. These projects serve as excellent starting points for putting what you have learned into practice.

Project 1: Intelligent Customer Support Bot

By combining RAG and agent architecture, you can build a bot that generates responses from your company knowledge base and creates support tickets when needed. This bot automatically classifies customer questions, finds relevant answers from the knowledge base, and escalates unresolved issues to human operators.

Project 2: Code Review Agent

You can create an agent integrated with the GitHub API to automatically review pull requests. The agent detects code quality issues, security vulnerabilities, and performance problems, then provides suggestions and recommendations to developers.

Project 3: Data Analysis Assistant

An assistant that analyzes CSV and database files, creates charts, and prepares reports. LangChain's pandas DataFrame agent (`create_pandas_dataframe_agent`) and SQL tools are ideal for these types of projects:

from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits import SQLDatabaseToolkit

# Database connection
db = SQLDatabase.from_uri("sqlite:///sales_data.db")
toolkit = SQLDatabaseToolkit(db=db, llm=model)

# Create SQL Agent
sql_agent = create_react_agent(model, toolkit.get_tools())

result = sql_agent.invoke({
    "messages": [("human", "List the top 5 best-selling products in the last 3 months")]
})

Project 4: Multilingual Content Generation Pipeline

With LangChain, you can build multilingual content generation pipelines. Using LCEL's parallel processing capabilities, you can translate content into multiple languages simultaneously, perform SEO optimization for each language, and automatically publish the results to your CMS.

Best Practices

  • Always add error handling; LLM calls can fail
  • Implement rate limiting; be careful not to exceed your API quotas
  • Use caching; avoid making repeated API calls for the same queries
  • Prefer structured output; validate outputs with Pydantic models
  • Always use LangSmith for monitoring in production

11. Frequently Asked Questions

Is Python required to learn LangChain?

LangChain has both Python and JavaScript/TypeScript versions. The Python version is more mature and has more integrations. Basic Python knowledge (functions, classes, decorators) is sufficient. Asynchronous programming knowledge is helpful but not required for beginners.

Which LLMs can I use with LangChain?

LangChain is compatible with OpenAI (GPT-4o, GPT-4), Anthropic (Claude), Google (Gemini), Meta (Llama), Mistral, Cohere, and many more providers. You can also use local models through tools like Ollama and vLLM. Switching models typically requires only a single-line change.

What is the difference between an Agent and a Chain?

A Chain applies predefined fixed steps in sequence; it follows the same path on every run. An Agent, on the other hand, uses the LLM as a reasoning engine to dynamically decide which steps to take, which tools to use, and when to stop. Agents are more flexible but less predictable.

Can LangChain be used in production?

Yes, many companies use LangChain in production. However, for production deployments, it is recommended to implement LangSmith monitoring, proper error handling, rate limiting, caching, and security measures. You can serve it as an API with LangServe or FastAPI.

When should LangGraph be used?

LangGraph should be used when simple linear chains are insufficient. It is the ideal solution for cyclic workflows, conditional branching, multi-agent coordination, processes requiring human intervention, and scenarios demanding complex state management. Standard chains are adequate for simple query-response applications.

Which vector database is best for RAG applications?

The choice depends on your use case. For prototyping, FAISS or Chroma provide quick starts. In production, managed services like Pinecone, Weaviate, or Qdrant offer scalability and high availability. If you are using PostgreSQL, the pgvector extension is a good option.

Is LangChain expensive to use?

LangChain itself is open-source and free. However, costs depend on the LLM provider you use. Commercial APIs like OpenAI and Anthropic charge per token. To reduce costs, you can use caching, smaller models (like GPT-4o-mini), and local models (Llama via Ollama). LangSmith offers 5,000 traces per month in the free tier.

The LangChain ecosystem continues to evolve rapidly and is becoming the standard in AI agent development. Using the core concepts and practical examples covered in this guide, you can start developing your own AI agent projects today. Build simple workflows with Chains, add dynamic decision-making capabilities with Agents, integrate knowledge bases with RAG, and create complex multi-agent systems with LangGraph to multiply your capabilities.
