LangChain in Real-World Apps
How to practically implement LangChain to build intelligent, agent-powered applications.
LangChain Guide for JavaScript/TypeScript
LangChain is a powerful framework for building applications powered by large language models. It lets you compose context-aware, multi-step “chains” of prompts, LLM calls, tools, and memory to create smart apps. The diagram below illustrates the LangChain stack: at the core is a protocol layer (LCEL, the LangChain Expression Language), wrapped by LangChain Core components (Chains/Agents/Prompts), extended by integrations (models, tools, vector stores), and supported by tooling like LangSmith and LangServe.
Figure: LangChain’s architecture stack – core protocols, components, and integrations (from LangChain docs).
LangChain makes it easy to connect prompts to LLMs, add tools and memory, and deploy the whole thing as an API. It’s written in TypeScript, so it fits naturally into modern JS/TS stacks.
Installation & Setup
To start, install LangChain via npm (or yarn/pnpm). The basic package is langchain, which includes core abstractions:
```bash
npm install langchain @langchain/core
```

Then import the modules you need. For example, to use OpenAI models and prompts:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
} from "@langchain/core/prompts";

const model = new ChatOpenAI({ temperature: 0.7 }); // an LLM instance
const prompt = ChatPromptTemplate.fromMessages([
  SystemMessagePromptTemplate.fromTemplate("You are a helpful assistant."),
  HumanMessagePromptTemplate.fromTemplate("What is the capital of France?"),
]);

// Chain the prompt to the model
const response = await prompt.pipe(model).invoke({});
console.log(response.content); // e.g. "Paris"
```

Above, we set up a chat model (`ChatOpenAI`), created a prompt template, and piped it into the model. The `invoke` call runs the chain. (No actual metal chains involved – but it is a chain of operations, pun intended 😜!)
You may need API keys (e.g. OPENAI_API_KEY) as environment variables. For other providers like Anthropic, Vertex, or Cohere, just install the corresponding @langchain/xxxx package and initialize it similarly.
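For instance, here is a minimal sketch of swapping in Anthropic's Claude. It assumes the `@langchain/anthropic` package is installed and `ANTHROPIC_API_KEY` is set; the model ID shown is one example and may change over time:

```typescript
// Assumes: npm install @langchain/anthropic, and ANTHROPIC_API_KEY in the env
import { ChatAnthropic } from "@langchain/anthropic";

const claude = new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" });
const reply = await claude.invoke("Say hi in one word.");
console.log(reply.content);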
Core Components
LangChain’s core modules include Prompts, LLMs, Chains, Agents, and Memory. Let’s look at each with simple examples:
Prompts
Prompt templates help format instructions to the LLM. For example, a chat prompt template can combine system and user messages:
```typescript
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
} from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  SystemMessagePromptTemplate.fromTemplate("You are a friendly assistant."),
  HumanMessagePromptTemplate.fromTemplate("Tell me a joke about cats."),
]);
```

Here `{placeholder}` template strings can be used to inject variables. Prompts keep your instructions modular and reusable. They're usually piped into a model (e.g. `prompt.pipe(model)`).
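As a small aside, you can format a template without calling any model – handy for testing. A minimal sketch:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const jokePrompt = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}.");
// formatMessages fills the placeholder and returns chat messages:
const messages = await jokePrompt.formatMessages({ topic: "cats" });
console.log(messages[0].content); // "Tell me a joke about cats."
```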
LLMs
LangChain provides interfaces to many LLM providers. For instance, using OpenAI’s GPT models:
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4", temperature: 0.5 });
const res = await model.invoke("Say something nice!");
console.log(res.content);
```

You can also use other models (Anthropic Claude, Azure OpenAI, Vertex AI, etc.) with similar classes. The models take your prompts and produce text (or structured output).
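On structured output: recent `@langchain/openai` versions support binding a schema to the model. A hedged sketch, assuming the `zod` package is installed (the joke schema here is made up for illustration):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

// A hypothetical schema describing the shape we want back:
const jokeSchema = z.object({
  setup: z.string().describe("The joke's setup line"),
  punchline: z.string().describe("The joke's punchline"),
});

const structured = new ChatOpenAI({ model: "gpt-4" }).withStructuredOutput(jokeSchema);
const joke = await structured.invoke("Tell me a joke about TypeScript.");
console.log(joke.punchline); // a typed object, not raw text
```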
Chains
A Chain sequences operations. The simplest is `LLMChain`, which pairs a prompt with an LLM – in LCEL you express the same thing by piping a prompt into a model. For example:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromTemplate("Make up a {adjective} joke.");
const llm = new ChatOpenAI();
const chain = prompt.pipe(llm);
const result = await chain.invoke({ adjective: "spooky" });
console.log(result.content); // e.g. "Why did the ... 🦇"
```

You can also build more complex chains (retrieval-augmented, QA chains, etc.). LangChain even has helper classes like `ConversationChain` for chatbots. The pipeline flow is: Prepare inputs → Run LLM → Parse output.
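That final "parse output" step usually means piping into an output parser. A minimal sketch, reusing the `prompt` and `llm` from the example above:

```typescript
// StringOutputParser turns the model's message into a plain string:
import { StringOutputParser } from "@langchain/core/output_parsers";

const parsedChain = prompt.pipe(llm).pipe(new StringOutputParser());
const text = await parsedChain.invoke({ adjective: "spooky" });
console.log(text); // a string, not a message object
```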
Agents
Agents let the model call tools automatically. Think of an agent as a reasoning loop: it queries the LLM for what to do, executes a tool, then loops, until a final answer. For example, you might give it tools like a calculator or web search. Under the hood, it follows a flow like:
Figure: Tool-calling flow in a LangChain agent: user query → model suggests a tool call → tool executes → model returns final answer.
```typescript
// Sketch using the classic agent helper (newer releases favor LangGraph agents):
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { Calculator } from "@langchain/community/tools/calculator";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ temperature: 0 });
const tools = [new Calculator()]; // predefined tools
const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "openai-functions",
});
const answer = await executor.invoke({ input: "What's 13 factorial?" });
console.log(answer.output);
```

The agent consults the LLM to decide which tool(s) to use. This is great for complex tasks, but adds complexity and potential unpredictability. (If it calls the wrong tool or goes in loops, debugging can be 🐌 slow. LangSmith helps to trace these steps.)
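Tools themselves can be plain functions wrapped in a tool class. A brief sketch using `DynamicTool` – the tool's name and logic here are made up for illustration:

```typescript
import { DynamicTool } from "@langchain/core/tools";

// A hypothetical custom tool an agent could call:
const timeTool = new DynamicTool({
  name: "current_time",
  description: "Returns the current time as an ISO timestamp.",
  func: async () => new Date().toISOString(),
});
```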
Memory
Memory lets your app remember past interactions or facts. LangChain has memory classes like `BufferMemory` and `ConversationSummaryMemory`. For example, `BufferMemory` accumulates chat history:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { BufferMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

const model = new ChatOpenAI({});
const memory = new BufferMemory();
const conversation = new ConversationChain({ llm: model, memory });

const res1 = await conversation.invoke({ input: "Hi! I'm Jim." });
console.log(res1.response); // e.g. "Hi Jim!"
const res2 = await conversation.invoke({ input: "What's my name?" });
console.log(res2.response); // e.g. "You said your name is Jim."
```

In this snippet, the assistant remembers you introduced yourself. Memory modules are great for chatbots and multi-turn dialogues. You can also use vector-store-backed memory (contextual info in a vector DB) or summary memory for long chats.
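For long conversations, a raw buffer can blow past the context window; summary memory condenses the history with an LLM instead. A minimal sketch:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ConversationSummaryMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

// This memory uses its own LLM call to keep a rolling summary of the chat:
const summaryMemory = new ConversationSummaryMemory({
  llm: new ChatOpenAI({ temperature: 0 }),
});
const chat = new ConversationChain({ llm: new ChatOpenAI(), memory: summaryMemory });
```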
These examples are just the tip of the iceberg. Under the hood, LangChain uses LCEL (LangChain Expression Language) to pipe data through these components (notice how we used `prompt.pipe(llm)`). You can also `.stream()` outputs for incremental responses.
Under the Hood
LangChain abstracts the flow of data. At its core, a pipeline is a directed graph of “runnables” (prompts, models, memory, etc.), each passing its output to the next. When you call `pipeline.invoke()`, LangChain moves data through that graph. For example, in an `LLMChain`, the input variables are filled into the prompt template, the formatted prompt is sent to the model (`ChatOpenAI`), and the text output is returned.
If you use the LangChain Expression Language (LCEL), you can also stream responses. For instance, `const stream = await llmChain.stream(inputs)` produces an async iterator of output chunks you can render token by token.
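As a concrete sketch – the same joke chain as before, with a string parser so each chunk is plain text:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const streamingChain = ChatPromptTemplate.fromTemplate("Make up a {adjective} joke.")
  .pipe(new ChatOpenAI())
  .pipe(new StringOutputParser());

// Each chunk is printed as soon as the model emits it:
for await (const chunk of await streamingChain.stream({ adjective: "spooky" })) {
  process.stdout.write(chunk);
}
```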
Behind the scenes, LangChain handles a lot of glue code: it formats prompts as messages for chat models, parses JSON if needed, splits long text into chunks, and more. It also manages things like token limits and can interface with LangSmith for tracing. The key idea: composition over configuration. You compose building blocks in code (or declaratively via LCEL) to define your application logic.
When to Use What
- Simple LLM calls: Use a Prompt + LLM (e.g. `ChatOpenAI`) when you just need to ask the model something directly.
- Structured workflows: Use Chains when you have a fixed sequence (e.g., "search for info" + "summarize").
- Tool-using workflows: Use Agents when the model should decide what tools (APIs, calculators, etc.) to invoke dynamically.
- Chatbots: Use `ConversationChain` or a custom chain with `BufferMemory` to hold history.
- Retrieval/Q&A: Use a vector store + retriever to fetch relevant docs, then chain them with an LLM to produce the answer.
- Embeddings & Vectors: When you need similarity search or semantic retrieval, use the Embeddings + VectorStore components (covered below).
Remember: LangChain is an opinionated framework, so it shines when your task fits its paradigm (LLMs + modular components). If you need total custom logic with micro-optimizations, it might feel heavy. Think of it like a smart pipeline library for AI – sometimes it abstracts too much, but often it saves boilerplate.
Pros & Cons
Advantages:
- Modular & Composable: Easy to mix prompts, LLMs, tools, and memory blocks. Great for RAG, chatbots, and multi-step flows.
- TypeScript Support: Written in TS with type definitions. You get compiler checks (though see cons) and can integrate into Node or Deno apps smoothly.
- Ecosystem & Integrations: Tons of built-in integrations (OpenAI, Azure, Pinecone, S3, etc.) and community extensions.
- Cross-Environment: Runs on Node.js, Vercel/Next, Cloudflare, and even browser settings.
- Active Development: The JS docs, examples, and community are growing. Constantly evolving with new features.
Disadvantages:
- Documentation Lag: As one user put it, the JS/TS version often lags behind the Python one in features and docs. You might need to dig into source code or wait for updates.
- Type Safety Woes: Ironically, despite TS, LangChain’s abstractions (dynamic chains, prompts, etc.) can lead to runtime errors if you miss a variable. The TS types aren’t foolproof for every chain step.
- Overhead: If your use case is simple, LangChain may feel heavyweight (lots of classes and pipelining). For tiny scripts, raw API calls might be easier.
- LLM Limitations: Any model framework inherits LLM quirks (non-determinism, hallucinations, tokens). Chains and agents can sometimes go off-track or need careful prompting.
In short: LangChain speeds up complex AI apps, but adds its own learning curve. It’s fantastic for prototype-to-prod pipelines, but overkill for one-off requests.
Integrating with Vercel AI SDK
If you’re building on Next.js or Vercel, there’s a handy LangChainAdapter in Vercel’s AI SDK. LangChain by itself doesn’t handle streaming to a React client, so Vercel provides an adapter to bridge that gap. For example:
```typescript
// app/api/chat/route.ts
import { ChatOpenAI } from "@langchain/openai";
import { LangChainAdapter } from "ai";

export const runtime = "edge";

export async function POST(req: Request) {
  const { prompt } = await req.json();
  const model = new ChatOpenAI({ model: "gpt-3.5-turbo-0613" });
  const stream = await model.stream(prompt); // an LCEL stream; .stream() works on chains too
  return LangChainAdapter.toDataStreamResponse(stream);
}
```

Then in your React component, use Vercel's `useCompletion` or `useChat` hook:
```tsx
'use client';
import { useCompletion } from '@ai-sdk/react';

export default function ChatPage() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: '/api/chat' // route.ts above is served at /api/chat, not /api/chat/route
  });
  // ...
}
```

This wires up the LangChain model to the UI. As of 2025, the Vercel docs for LangChain are still evolving, so be sure to check the Vercel AI SDK docs frequently. (They even support streaming from LCEL `.stream()` responses.) In short, use `LangChainAdapter` and Vercel hooks to seamlessly integrate LangChain chains with your frontend.
Vector Embeddings & Retrieval
Embeddings are numeric vector representations of text. They let you compare semantic similarity (e.g., “cat” vs “kitten”). In LangChain, you generate embeddings with an Embeddings class (OpenAI, Cohere, Mistral, etc.). Then you store them in a VectorStore for similarity search.
For example, using OpenAI embeddings and an in-memory store:
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "@langchain/core/documents";

const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-large" });
const vectorStore = new MemoryVectorStore(embeddings);

// Add text documents (as LangChain `Document` objects) to the store:
await vectorStore.addDocuments([
  new Document({ pageContent: "LangChain helps build AI apps.", metadata: { id: "doc1" } }),
  new Document({ pageContent: "FastAPI is great for Python APIs.", metadata: { id: "doc2" } }),
]);

// Later, search for relevant docs:
const results = await vectorStore.similaritySearch("How do I build a chatbot?");
console.log(results[0].pageContent);
```

Here `MemoryVectorStore` holds vectors in RAM. In practice, you might use persistent vector DBs like Chroma, Pinecone, Qdrant, MongoDB Atlas Vector Search, etc. (LangChain has adapters for all of these.)
Vector retrieval is key for RAG (Retrieval-Augmented Generation): you embed a query, find nearest documents, and feed them into your LLM prompt.
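As a sketch of that loop, here's a minimal RAG chain built on the `vectorStore` from the example above (the prompt wording is illustrative, not canonical):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";

const retriever = vectorStore.asRetriever();
const ragPrompt = ChatPromptTemplate.fromTemplate(
  "Answer using only this context:\n{context}\n\nQuestion: {question}"
);

const ragChain = RunnableSequence.from([
  {
    // Fetch docs for the question, then flatten them into a context string:
    context: retriever.pipe((docs) => docs.map((d) => d.pageContent).join("\n")),
    question: new RunnablePassthrough(),
  },
  ragPrompt,
  new ChatOpenAI(),
  new StringOutputParser(),
]);

console.log(await ragChain.invoke("How do I build a chatbot?"));
```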
Using AstraDB for Vector Storage
For a more robust solution, you can store embeddings in AstraDB (DataStax’s serverless DB) using LangChain’s AstraDBVectorStore (part of @langchain/community). First, configure your Astra credentials:
```typescript
import { AstraDBVectorStore } from "@langchain/community/vectorstores/astradb";
import { OpenAIEmbeddings } from "@langchain/openai";

const astraConfig = {
  token: process.env.ASTRA_DB_APPLICATION_TOKEN!,
  endpoint: process.env.ASTRA_DB_API_ENDPOINT!,
  namespace: process.env.ASTRA_DB_KEYSPACE!,
  collection: process.env.ASTRA_DB_COLLECTION || "docs",
  collectionOptions: { vector: { dimension: 1536, metric: "cosine" } }
};

// fromDocuments embeds `documents` and stores them in AstraDB in one step:
const vectorStore = await AstraDBVectorStore.fromDocuments(
  documents,
  new OpenAIEmbeddings({ openAIApiKey: process.env.OPENAI_API_KEY }),
  astraConfig
);

// Add further batches later (no need to re-add the initial `documents`):
// await vectorStore.addDocuments(moreDocuments);
```

Now your vectors live in AstraDB. You can later query them (`vectorStore.similaritySearch(...)`) and LangChain will retrieve matching docs from the database. (Tip: make sure your `dimension` and `metric` match the embedding model.)
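A short usage sketch against the store created above (the query text and `k` value are arbitrary):

```typescript
// Top-4 nearest documents for a query:
const hits = await vectorStore.similaritySearch("What is LangChain?", 4);
for (const doc of hits) console.log(doc.pageContent);

// Or expose the store as a retriever for use inside chains:
const retriever = vectorStore.asRetriever(4);
```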
Using AstraDB or another managed store ensures persistence and scalability for large datasets. It’s an excellent way to power RAG in production.
TL;DR
- LangChain is a JS/TS framework for LLM-powered apps (chatbots, RAG, etc.). It's installed via npm (`langchain` plus provider packages) and written in TypeScript.
- Key concepts: Prompts (templates for LLM inputs), LLMs (model classes like `ChatOpenAI`), Chains (pipelines of prompts + models), Agents (LLM-driven tool use), and Memory (stateful chat history).
- LangChain uses an expression language (LCEL) to compose runnables. You can `.invoke()` or `.stream()` chains. Under the hood it manages message formatting and tooling.
- Pros: Modular, lots of integrations, good for rapid AI app dev. Cons: Docs/TS sometimes lag the Python version, and type safety can be tricky.
- Vercel AI SDK: Use the built-in `LangChainAdapter` to stream LangChain results to your client. For example, pipe a `ChatOpenAI` `.stream()` into `LangChainAdapter.toDataStreamResponse`.
- Vector Embeddings: Convert text to vectors (e.g. `OpenAIEmbeddings`) and store them in a vector DB (like `MemoryVectorStore` or Pinecone). Useful for semantic search and RAG.
- AstraDB Example: `AstraDBVectorStore.fromDocuments(docs, new OpenAIEmbeddings(), astraConfig)` creates a DB-backed vector store.
- Keep an eye on the docs – LangChain and the Vercel SDK are evolving. Check official guides and stay updated.
Happy chaining!
Summary
LangChain provides a high-level toolkit for building AI workflows in JavaScript/TypeScript. You install the langchain npm package, then import models and prompts to craft chains of operations. Core abstractions include PromptTemplates for formatting, LLM classes for model access, and Chains to glue them together. Agents add dynamic tool-calling via a feedback loop, and Memory classes let your app remember context. LangChain handles the data flow (through LCEL pipelines) and integrates with many services.
Advantages are rapid composition and a rich ecosystem. Disadvantages include occasional complexity and the need to manage TypeScript types carefully. When deploying on Vercel/Next, the LangChainAdapter lets you hook a LangChain stream into the Vercel AI SDK (for example using useCompletion). For advanced apps, use vector embeddings (e.g. OpenAIEmbeddings) stored in a vector store for similarity search. You can use MemoryVectorStore for in-memory tests or managed databases like AstraDB for production.
In summary: LangChain is like a Swiss Army knife for LLM apps in JS/TS. It abstracts away boilerplate, enabling you to focus on logic. Just remember to stay updated with the docs, as both LangChain and adapters (like on Vercel) are actively improving.
Sources
- LangChain JS Documentation (Getting Started, Core Concepts)
- LangChain API Reference (Classes like LLMChain, VectorStore)
- Vercel AI SDK – LangChain Adapter (Integration example)
- DataStax: LangChain.js with Astra DB Serverless (AstraDBVectorStore usage)
- Octomind Blog: On Type Safety in LangChain TS (JS/TS pros/cons)
Share this article
Hey, if you liked this article, please share it with your friends on social media – I'd really appreciate it!