Prerequisites

The boilerplate includes a LangChain integration for the OpenAI API out of the box. To use LangChain, you need to:

  • Get your OpenAI API key from the OpenAI API website.

Setup

  1. Add your OpenAI API key to the .env file.
.env
OPENAI_API_KEY=your-openai-api-key
  2. The boilerplate already includes the necessary configuration for LangChain in src/lib/ai/langchain.ts. This file initializes the OpenAI language model client.
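The boilerplate's actual file may differ, but a minimal src/lib/ai/langchain.ts could look roughly like this (assuming the @langchain/openai package and a completion-style model; the model name and options here are illustrative):

```typescript
import { OpenAI } from "@langchain/openai";

// Shared LLM client used across the app. The model name and temperature
// are placeholders — check the boilerplate's actual defaults.
export const llm = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
  temperature: 0.7,
  apiKey: process.env.OPENAI_API_KEY,
});
```

Exporting a single shared client keeps model configuration in one place, so every route that imports llm picks up changes automatically.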

Usage

To use LangChain in your application, you can create an API route that interacts with the language model. Here’s an example of how to set up a route that uses the OpenAI model for text generation:

import { kv } from '@vercel/kv';
import { llm } from '@/lib/ai/langchain';
import { Ratelimit } from '@upstash/ratelimit';
import { NextRequest, NextResponse } from 'next/server';
import { PromptTemplate } from "@langchain/core/prompts";

// Allow streaming responses up to 30 seconds

export const maxDuration = 30;

// Create Rate limit
const ratelimit = new Ratelimit({
    redis: kv,
    limiter: Ratelimit.fixedWindow(5, '30s'),
});

export async function POST(req: NextRequest) {
    // call ratelimit with request ip
    const { success, remaining } = await ratelimit.limit(req.ip ?? 'ip');

    // block the request if unsuccessful
    if (!success) return new Response('Rate limit exceeded', { status: 429 });

    const { topic } = await req.json();

    try {
        // Create a prompt template
        const template = "Write a short paragraph about {topic}.";
        const prompt = PromptTemplate.fromTemplate(template);

        // Generate the text
        const formattedPrompt = await prompt.format({ topic });
        const result = await llm.invoke(formattedPrompt);

        return NextResponse.json({ generatedText: result });
    } catch (error) {
        console.error('Error generating text:', error);
        return NextResponse.json({ error: 'Failed to generate text' }, { status: 500 });
    }
}
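The fixed-window limiter above admits five requests per 30-second window per IP. The idea can be sketched in a few lines of self-contained TypeScript (an in-memory illustration only — @upstash/ratelimit keeps its counters in Redis so the limit holds across serverless instances):

```typescript
// Sketch of a fixed-window rate limiter, analogous to
// Ratelimit.fixedWindow(5, '30s') in the route above.
class FixedWindowLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private maxRequests: number, private windowMs: number) {}

  limit(key: string, now: number = Date.now()): { success: boolean; remaining: number } {
    const entry = this.counts.get(key);
    // Start a fresh window if none exists or the current one has expired
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(key, { windowStart: now, count: 1 });
      return { success: true, remaining: this.maxRequests - 1 };
    }
    // Within the window: count the request and allow until the limit is hit
    entry.count += 1;
    return {
      success: entry.count <= this.maxRequests,
      remaining: Math.max(0, this.maxRequests - entry.count),
    };
  }
}

// Five requests per 30-second window, like the route above
const limiter = new FixedWindowLimiter(5, 30_000);
console.log(limiter.limit("203.0.113.7")); // → { success: true, remaining: 4 }
```

The trade-off of a fixed window is a possible burst at window boundaries; Upstash also offers sliding-window and token-bucket limiters if that matters for your route.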

Advanced Usage

LangChain offers many powerful features beyond simple text generation. Here are a few examples:

Chains

Chains allow you to combine multiple processing steps. For example, you can create a chain that answers a question based on the contents of your documents:

import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { llm } from "@/lib/ai/langchain";

// Load the documents that provide the answering context
const loader = new TextLoader("path/to/your/documents.txt");
const documents = await loader.load();

const qaChain = new LLMChain({
  llm,
  prompt: PromptTemplate.fromTemplate(
    "Question: {question}\nAnswer the question based on the provided context:\n{context}"
  ),
});

const answer = await qaChain.invoke({
  question: "What is the main topic of the documents?",
  context: documents.map((doc) => doc.pageContent).join("\n\n"),
});
console.log(answer);

Agents

Agents can use tools to gather information and make decisions. Here’s a simple example using a calculator tool:

import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { Calculator } from "@langchain/community/tools/calculator";
import { llm } from "@/lib/ai/langchain";

const tools = [new Calculator()];

const executor = await initializeAgentExecutorWithOptions(tools, llm, {
    agentType: "zero-shot-react-description",
    verbose: true,
});

const input = `What is the square root of 256 multiplied by 2?`;

const result = await executor.invoke({ input });

console.log(`Got output ${JSON.stringify(result, null, 2)}`);

Model Selection

If you want to use a different model, you can change the model or provider in the src/lib/ai/langchain.ts file.

You can explore the available models on the LangChain website.
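As an example of a provider swap, src/lib/ai/langchain.ts could export an Anthropic client instead — this assumes the @langchain/anthropic package is installed and an ANTHROPIC_API_KEY is set in your .env (the model name below is illustrative):

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

// Replaces the OpenAI client; reads ANTHROPIC_API_KEY from the environment.
export const llm = new ChatAnthropic({
  model: "claude-3-5-sonnet-latest",
});
```

Because the boilerplate imports llm from this one file, swapping the provider here propagates everywhere, though call sites may need small adjustments since chat models return message objects rather than plain strings.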
