Launch Express provides seamless integration with LangChain, allowing you to build powerful language model applications in your NextJS project. This guide will walk you through the process of setting up and using LangChain in your application.
If you selected the LangChain provider via the CLI, the Starter-Kit already includes the necessary LangChain configuration in src/lib/ai/langchain.ts. This file initializes the OpenAI language model client.
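While the file generated for your project may differ, a minimal version of src/lib/ai/langchain.ts might look like the sketch below. The model settings shown here are illustrative assumptions, not the Starter-Kit's actual defaults:

```typescript
// src/lib/ai/langchain.ts — a minimal sketch; the file generated by the CLI may differ
import { OpenAI } from '@langchain/openai';

// Export a shared LLM client, configured from the OPENAI_API_KEY environment variable
export const llm = new OpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  temperature: 0.7, // illustrative default
});
```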
To use LangChain in your application, you can create an API route that interacts with the language model. Here’s an example of how to set up a route that uses the OpenAI model for text generation:
```typescript
import { kv } from '@vercel/kv';
import { llm } from '@/lib/ai/langchain';
import { Ratelimit } from '@upstash/ratelimit';
import { NextRequest, NextResponse } from 'next/server';
import { PromptTemplate } from '@langchain/core/prompts';

// Allow the route to run for up to 30 seconds
export const maxDuration = 30;

// Create the rate limit: 5 requests per 30-second window
const ratelimit = new Ratelimit({
  redis: kv,
  limiter: Ratelimit.fixedWindow(5, '30s'),
});

export async function POST(req: NextRequest) {
  // Rate-limit by the request IP
  const { success } = await ratelimit.limit(req.ip ?? 'ip');

  // Block the request if the limit has been exceeded
  if (!success) return new Response('Rate limit exceeded', { status: 429 });

  const { topic } = await req.json();

  try {
    // Create a prompt template
    const template = 'Write a short paragraph about {topic}.';
    const prompt = PromptTemplate.fromTemplate(template);

    // Generate the text
    const formattedPrompt = await prompt.format({ topic });
    const result = await llm.invoke(formattedPrompt);

    return NextResponse.json({ generatedText: result });
  } catch (error) {
    console.error('Error generating text:', error);
    return NextResponse.json({ error: 'Failed to generate text' }, { status: 500 });
  }
}
```
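Assuming you place this handler at app/api/generate/route.ts (the path is an assumption; adjust it to wherever you mount the route), you could call it from the client like this:

```typescript
// Hypothetical client-side call; assumes the route above is mounted at /api/generate
const res = await fetch('/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ topic: 'serverless databases' }),
});

if (!res.ok) throw new Error(`Request failed: ${res.status}`);
const { generatedText } = await res.json();
console.log(generatedText);
```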
Chains allow you to combine multiple processing steps. For example, you can build a question-answering chain that loads documents and then answers a question based on their content:
```typescript
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { llm } from "@/lib/ai/langchain";

// Load the source documents from a local text file
const loader = new TextLoader("path/to/your/documents.txt");
const documents = await loader.load();

// Build a prompt that injects the question and the document context
const prompt = PromptTemplate.fromTemplate(
  "Question: {question}\nAnswer the question based on the provided context:\n{context}"
);

const qaChain = new LLMChain({ llm, prompt });

const answer = await qaChain.invoke({
  question: "What is the main topic of the documents?",
  // Join the page content of every loaded document into a single context string
  context: documents.map((doc) => doc.pageContent).join("\n\n"),
});

console.log({ answer });
```
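Note that recent LangChain versions deprecate LLMChain in favor of composing runnables directly with .pipe(). A rough equivalent of the chain above, assuming the same prompt, llm, and documents, might look like this:

```typescript
// Sketch of the same chain using the runnable composition API (prompt.pipe(llm))
const chain = prompt.pipe(llm);

const answer = await chain.invoke({
  question: "What is the main topic of the documents?",
  context: documents.map((doc) => doc.pageContent).join("\n\n"),
});

console.log(answer);
```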