// building with ai · 3.5

Why I stopped recommending LangChain to founders.

Synthwave software workbench showing a simple agent stack beside a heavy framework tower, with code, queue, database, evals, and approval gates.

I stopped recommending LangChain as the default starting point for founders because most first agents do not need a big framework. They need a clear workflow, plain code, a model call, typed tools, a database, logs, evals, and approval gates. LangChain and LangGraph can be useful when the workflow truly needs framework features. But starting there often hides the simple parts founders need to understand.

What is the LangChain decision really about?

LangChain is an AI application framework. It can help with chains, agents, tools, retrieval, and workflow patterns. The issue is not that it is bad. The issue is that many founders reach for it before they understand the workflow, data, permissions, and evals that make an agent useful.

This is not a framework dunk

LangChain has helped a lot of people build AI apps. The ecosystem is large, the docs are active, and LangGraph is a serious tool for stateful agent workflows.

That is exactly why the advice needs nuance. A strong framework can still be the wrong first move for a founder trying to understand a simple agent build.

The default first stack should be understandable. If the founder cannot explain what happens between input, model call, tool call, storage, review, and output, the stack is too abstract too early.

The problem with starting too high

Frameworks can make hard things easier. They can also make simple things harder to inspect. Early agent builds need inspection more than elegance.

When a founder starts with too much abstraction, debugging turns into guesswork. Is the issue the prompt, retrieval, tool schema, agent loop, memory layer, callback, state graph, or framework default? Maybe. But the first agent should not need that many suspects.

| Founder need | Heavy framework risk | Lighter default |
| --- | --- | --- |
| Understand the workflow | The framework hides control flow | Plain function calls and explicit steps |
| Debug bad output | Many abstraction layers blur the failure | Log prompt, context, tools, and response |
| Keep scope narrow | Easy to add agents, chains, memory, and tools too soon | One job, one model call, one review loop |
| Control permissions | Tool wrappers can look safer than they are | Typed tools with explicit read and write boundaries |

The stack I recommend first

For most first agents, start with less. Use your normal app language, the model provider SDK, a database table, a queue if the job is async, and a small set of typed tools.

The core loop is not complicated: receive input, fetch context, build the prompt, call the model, validate the output, save the draft, route to a human, log corrections.
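
That loop can be sketched in a few lines of plain code. Everything here is illustrative: `fetch_context` and `call_model` are placeholders for your own context builder and your model provider's SDK call, not real APIs.

```python
# Sketch of the core loop: receive input, fetch context, build the prompt,
# call the model, validate the output, save the draft for human review.
from dataclasses import dataclass, field

@dataclass
class Draft:
    ticket_id: str
    prompt: str
    output: str
    status: str = "pending_review"        # routed to a human by default
    corrections: list = field(default_factory=list)

def fetch_context(ticket_id: str) -> str:
    # Placeholder: pull trusted sources (docs, CRM notes) for this ticket.
    return f"context for {ticket_id}"

def call_model(prompt: str) -> str:
    # Placeholder: swap in your model provider's SDK call here.
    return f"draft reply based on: {prompt[:40]}"

def handle_ticket(ticket_id: str, user_input: str) -> Draft:
    context = fetch_context(ticket_id)                    # fetch context
    prompt = f"{context}\n\nCustomer says: {user_input}"  # build the prompt
    output = call_model(prompt)                           # call the model
    if not output.strip():                                # validate the output
        raise ValueError("empty model output")
    return Draft(ticket_id=ticket_id, prompt=prompt, output=output)
```

Notice there is nowhere for a bug to hide: every step is a named function you can log, test, and replace.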

That loop teaches the team what matters. It also keeps the first version easy to replace later if a framework earns its place.

The light-stack rule

Do not add an agent framework until you can name the specific problem it solves better than plain code.

Where LangChain still makes sense

There are real cases where LangChain or LangGraph can be the right choice. If you need stateful workflows, graph-based control flow, durable execution, human review nodes, retries, and a larger ecosystem, a framework can save time.

The key is choosing it for a concrete need, not because "agent framework" sounds like the adult option. If your first workflow is one support draft with two read tools, plain code is usually enough.

What replaced most abstractions for me

Three patterns replaced most of the early framework weight: typed tools, explicit state, and evals.

Typed tools make inputs and outputs clear. The agent cannot call a vague function and hope. It calls a narrow function with a schema a human can read.
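
A minimal sketch of a typed tool using only dataclasses. `OrderQuery`, `OrderSummary`, and `lookup_order` are invented names for illustration; the point is the narrow, read-only signature a human can audit at a glance.

```python
# A typed tool: one narrow input, one narrow output, explicit read-only boundary.
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderQuery:
    order_id: str        # the only input the tool accepts

@dataclass(frozen=True)
class OrderSummary:
    order_id: str
    status: str
    total_cents: int

# Stand-in for a real database lookup.
_FAKE_DB = {"A-100": ("shipped", 4999)}

def lookup_order(query: OrderQuery) -> OrderSummary:
    """Read-only tool: returns a summary, never mutates anything."""
    status, total = _FAKE_DB[query.order_id]
    return OrderSummary(query.order_id, status, total)
```

The frozen dataclasses make the boundary literal: the agent cannot smuggle extra parameters in or mutate the result on the way out.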

Explicit state means the workflow status lives in a database, ticket, document, or queue. The system knows whether a draft is pending, approved, rejected, sent, failed, or escalated.
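
One way to make that state explicit is a single status enum plus a transition check, so an illegal move fails loudly instead of silently. The statuses mirror the ones above; the allowed transitions are an example, not a prescription.

```python
# Explicit workflow state: every status is named, every transition is checked.
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"
    SENT = "sent"
    FAILED = "failed"
    ESCALATED = "escalated"

# Illustrative transition table; adjust to your own review loop.
ALLOWED = {
    Status.PENDING:   {Status.APPROVED, Status.REJECTED, Status.ESCALATED},
    Status.APPROVED:  {Status.SENT, Status.FAILED},
    Status.REJECTED:  set(),
    Status.SENT:      set(),
    Status.FAILED:    {Status.PENDING},   # retry by re-queuing
    Status.ESCALATED: set(),
}

def transition(current: Status, target: Status) -> Status:
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target
```

The same table maps directly onto a database column later, which is part of why explicit state migrates so cleanly.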

Evals give you a correction loop. You do not need a huge test platform on day one. You need real examples and a habit of measuring what the agent gets wrong.
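
A day-one eval can be this small: a handful of real cases and a pass rate. The `must_contain` check here is a deliberately crude stand-in for whatever "wrong" means in your workflow; `agent` is any callable that maps input text to output text.

```python
# A day-one eval: real cases, a pass rate, and the failures to read by hand.
def run_evals(agent, cases):
    failures = []
    for text, must_contain in cases:
        output = agent(text)
        if must_contain.lower() not in output.lower():
            failures.append((text, output))
    pass_rate = 1 - len(failures) / len(cases)
    return pass_rate, failures

# Illustrative cases pulled from real tickets, not synthetic prompts.
cases = [
    ("Where is my order A-100?", "A-100"),
    ("Can I get a refund?", "refund"),
]
```

The habit matters more than the harness: rerun the same cases after every prompt or source change, and read the failures, not just the rate.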

The founder version of an agent architecture

If you are a founder, the architecture should be something you can inspect without becoming an AI framework specialist.

  1. One workflow table or queue.
  2. One context builder that pulls trusted sources.
  3. One prompt builder with examples and stop rules.
  4. One model call.
  5. One validator for shape and risky claims.
  6. One human approval surface.
  7. One log of inputs, sources, output, and corrections.
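
Step 5 is often the least familiar, so here is a sketch of a validator that checks shape first and then flags claims a human must approve. The risky-phrase list is illustrative; yours should grow out of your own correction log.

```python
# Step 5: a validator for shape and risky claims. Returns problems, not a verdict,
# so the approval surface can show a reviewer exactly what to check.
RISKY_PHRASES = ("guarantee", "refund has been issued", "legal advice")

def validate_draft(draft: str, max_len: int = 2000) -> list:
    """Return a list of problems; an empty list means the draft may proceed."""
    problems = []
    if not draft.strip():
        problems.append("empty draft")
    if len(draft) > max_len:
        problems.append("draft too long")
    lowered = draft.lower()
    for phrase in RISKY_PHRASES:
        if phrase in lowered:
            problems.append(f"risky claim: {phrase!r} needs human approval")
    return problems
```

Returning a problem list instead of a boolean keeps the human in the loop: the reviewer sees why a draft was held, not just that it was.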

That stack will not impress framework people. It will ship useful work.

When to graduate to a framework

Graduate when the plain-code version has real pressure. Maybe the workflow needs branching state. Maybe many agents need shared tools. Maybe you need durable retries, tracing, or a graph with review nodes.

At that point, you will know what you are buying. You are not buying "AI architecture." You are buying a solution to a real operational problem you have already felt.

The decision test

Before choosing LangChain, LangGraph, a different framework, or plain code, answer these questions:

  1. What is the workflow?
  2. Who reviews the output?
  3. Which tools and sources is the agent allowed to touch?
  4. What happens when the model is wrong?
  5. What specific problem would a framework solve better than plain code?

If the last answer is vague, start lighter.

The fair counterargument

The fair counterargument is that frameworks save time once the workflow has real complexity. LangChain's current docs describe a prebuilt agent architecture and integrations across models and tools. LangGraph focuses on durable execution, streaming, human review, and agent orchestration.

Those are real capabilities. If you are building a stateful workflow with many branches and review points, it may be wasteful to rebuild all of that yourself.

The point is sequencing. Understand the workflow first. Add the framework when it solves a problem you can name.

The cost of abstraction

Every abstraction has a bill. Sometimes the bill is worth paying. Sometimes it shows up as slower debugging, harder onboarding, brittle upgrades, and engineers arguing about framework patterns instead of the customer workflow.

Founders should be especially careful because early systems change weekly. The first agent may need different sources, a different model, a different reviewer, or a different output format after three days of real use.

A small plain-code system bends faster. Once the workflow stabilizes, heavier structure becomes easier to justify.

A migration path that keeps options open

The safest path is not anti-framework. It is framework-later. Build the first version with clean boundaries: context builder, tool layer, model call, validator, state store, human review, and logs.

If you later move into LangGraph or another workflow framework, those boundaries map cleanly into nodes, tools, state, interrupts, and traces. You are not throwing work away. You are moving a known system into a stronger runtime.

If you start with tangled framework code before the workflow is known, migration goes the other direction: you spend time removing cleverness to understand what the agent actually does.

What I still borrow from frameworks

Even when I start with plain code, frameworks have taught useful patterns. Tool schemas matter. Tracing matters. Explicit state matters. Human review nodes matter. Streaming can improve user trust. Durable runs can save you from half-finished workflows.

The lesson is not "avoid all framework ideas." The lesson is "do not import the whole framework before the problem needs it."

Good architecture ideas should survive outside the package they came from. If a pattern is useful, you can copy the pattern first and adopt the library later.

Founder anti-patterns

The first anti-pattern is building a framework demo instead of a business workflow. The demo shows an agent calling tools. The business still has no clear owner, reviewer, or approval path.

The second anti-pattern is adding memory before adding truth. Persistent memory is not helpful if it stores vague notes, stale assumptions, or unreviewed model output.

The third anti-pattern is confusing agent loops with value. An agent that thinks for six steps and returns a mediocre draft is worse than one direct model call that returns a clean useful answer.

A simple selection rule

Choose the boring stack until the boring stack hurts. When plain code cannot handle branching state, durable retries, shared tracing, or complex human review, then reach for the right framework.

That rule keeps the founder focused on business constraints first. What is the workflow? Who reviews it? Which tools are safe? What happens when the model is wrong?

The framework decision should come after those answers, not before them.

What I tell founders now

Start with the job, not the framework. If the job is support drafting, build the support drafting loop. If the job is finance briefing, build the finance briefing loop. The first build should teach you how the business workflow behaves with AI in it.

After that, choose tools with evidence. If plain code is getting messy because the workflow has durable state and many review branches, bring in a workflow framework. If the pain is tool discovery across several clients, use MCP (Model Context Protocol). If the pain is bad output, fix context and evals before changing frameworks.

The framework is never the strategy. The operating loop is the strategy.

How to turn this into a project brief

If this topic is moving from article to build, write the project brief before picking tools. The brief should fit on one page. If it cannot, the scope is probably still too wide.

Use five fields: workflow, owner, sources, allowed actions, and proof. The workflow names the repeat job. The owner names the human reviewer. The sources name the systems and documents the agent may trust. The allowed actions name what the agent can read, draft, update, or never touch. The proof names the metric that decides whether the build worked.
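
The brief can even live next to the code as data, with a completeness check run before any tooling decision. A minimal sketch, assuming the five fields above; the example values in the test are hypothetical, not a template.

```python
# The one-page project brief as data: five fields, one completeness check.
from dataclasses import dataclass, fields

@dataclass
class ProjectBrief:
    workflow: str          # the repeat job
    owner: str             # the human reviewer
    sources: list          # systems and documents the agent may trust
    allowed_actions: dict  # e.g. {"read": [...], "draft": [...], "never": [...]}
    proof: str             # the metric that decides whether the build worked

def is_complete(brief: ProjectBrief) -> bool:
    """True only when every field has a non-empty value."""
    return all(bool(getattr(brief, f.name)) for f in fields(brief))
```

If `is_complete` is false, the scope conversation is not done, and no framework choice will fix that.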

This keeps the build tied to business work. Agents fail when they become an abstract technology project. They work when the job, reviewer, sources, permissions, and proof are clear before code starts.

Frequently asked questions

Is LangChain bad for AI agents?

No. LangChain can be useful. The issue is defaulting to it too early before the workflow, state, tools, permissions, and evals are clear.

What should founders use instead of LangChain first?

Most founders should start with plain code, a model provider SDK, typed tools, a database or queue, logs, evals, and human approval gates.

When should I use LangGraph?

Use LangGraph or a similar workflow framework when you need stateful agent flows, branching, durable execution, review nodes, retries, or more formal orchestration.

Can I switch to LangChain later?

Yes. A clean plain-code prototype with typed tools, explicit state, and good logs is easier to move into a framework later than a tangled framework prototype is to simplify.

What is the main risk of agent frameworks?

The main risk is hiding the basic workflow behind abstractions before the team understands the data, permissions, failure modes, and human review loop.

Want the simplest stack that still works?

The intake shows us your workflow, tools, risk, and review needs. Then we pick the lightest architecture that can do the job without hiding the moving parts.

Start the intake →