
What is LangGraph?

LangGraph is a Python (and JS) library in the LangChain ecosystem for building stateful, long-running agent/workflow applications as an explicit directed graph.

What that means in practice

  • You model your application as nodes (units of work like an LLM call, tool call, parser, validator) connected by edges (control-flow).
  • It supports branching, loops, and conditional routing cleanly (so you can build “agent control flow” beyond linear chains).
  • It has a well-defined execution model: nodes scheduled together run in parallel within a “super-step”, and the system synchronizes between super-steps.
  • It supports checkpointing/persistence so you can pause/resume, recover after crashes, and maintain conversation/workflow state keyed by a thread_id.

If LangChain “chains” feel like pipelines, LangGraph is the control-flow engine you use when your agent needs real orchestration: retries, branches, parallel tool/model calls, memory, and resumability.

The core idea: “a program is a graph + state”

LangGraph models an agent workflow as a graph driven by a shared state:

  • State: the shared data structure (a “snapshot” of your app) that flows through the graph.
  • Nodes: functions that do work (LLM call, tool call, parsing, logging, etc.) and return updates to state.
  • Edges: define what runs next (fixed transitions or conditional routing).

Execution proceeds in discrete “super-steps” (inspired by Pregel-style message passing). Nodes scheduled in the same super-step run in parallel; nodes in different super-steps run sequentially.

Nodes

A node is just a Python function:

  • Input: current state
  • Output: a partial update (you usually return only the keys you changed, not the whole state)

Conceptually:

def llama_node(state):
    # read what you need from state
    # do work (LLM/tool/logic)
    # return only the keys you changed
    return {"llama_reply": "..."}

Edges (normal edges)

A normal edge means "always go from A → B". You add one with add_edge("A", "B"). You wire the entry point with the virtual START node and finish at END. This is your straight-line workflow.

Conditional edges + routers

A conditional edge means “after node A finishes, call a routing function to decide where to go next.” In LangGraph docs, the “router” is literally the routing function passed to add_conditional_edges.

What a router returns

After node A runs, LangGraph calls routing_function(state), which can return:

  1. A node name (string) — go to exactly that node next.
  2. A list of node names — go to all of them next (parallel fan-out).
  3. A value that's mapped to a node name via a dict (the path_map argument), e.g. {True: "B", False: "C"}.

Router example

def route_model(state):
    text = state["user_input"]
    if text.startswith("Hey Qwen"):
        return "qwen_node"
    return "llama_node"

Then: add_conditional_edges("get_user_input", route_model)

Parallelism (fan-out / fan-in)

LangGraph parallelism is primarily driven by multiple outgoing edges: if a node has multiple outgoing edges, all destination nodes execute in parallel in the next super-step.

Two common patterns

  • Fan-out to multiple models

    get_user_input routes to both llama_node and qwen_node.

  • Fan-in (join)

    A downstream node waits for both to finish (practically: both write results into state, then the join node reads both and prints).

    Important practical note: when parallel nodes update the same state key, you need a safe merge strategy (“reducers” in LangGraph). The Graph API doc emphasizes that state updates are applied via reducer functions per key.

  • Dynamic fan-out with Send (optional but useful)

    If you don’t know ahead of time how many parallel tasks you’ll spawn (e.g., map-reduce), conditional edges can return Send(node_name, per_task_state) objects.

Checkpoints (persistence + crash recovery)

A checkpointer makes the graph durable: LangGraph saves a checkpoint of the state after each super-step, so you can pause, resume after a crash, and pick up a conversation later via its thread_id.

Reference List

  1. https://docs.langchain.com/oss/python/langgraph/graph-api