Prerequisites

The boilerplate comes with LangChain's OpenAI API integration out of the box. To use LangChain, you need to:

  • Get your OpenAI API key from the OpenAI API website.

Setup

  1. Add your OpenAI API key to the .env file.
.env
OPENAI_API_KEY=your-openai-api-key
  2. The boilerplate already includes the necessary configuration for LangChain in src/lib/ai/langchain.ts. This file initializes the OpenAI language model client.
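
For reference, here is a minimal sketch of what that file might look like, assuming it exports a ChatOpenAI instance named llm (the boilerplate's actual file may differ):

src/lib/ai/langchain.ts
import { ChatOpenAI } from "@langchain/openai";

// Reads OPENAI_API_KEY from the environment by default.
export const llm = new ChatOpenAI({
  model: "gpt-4o-mini", // example model name, adjust as needed
  temperature: 0.7,
});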

Usage

To use LangChain in your application, you can create an API route that interacts with the language model. Here’s an example of how to set up a route that uses the OpenAI model for text generation:
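
Below is a minimal sketch of such a route, assuming the Next.js App Router and the llm client exported from src/lib/ai/langchain.ts (the route path is just an example):

src/app/api/generate/route.ts
import { NextResponse } from "next/server";
import { llm } from "@/lib/ai/langchain";

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // invoke accepts a plain string prompt and returns a message object.
  const response = await llm.invoke(prompt);

  // For a chat model, the generated text lives on the message's content field.
  return NextResponse.json({ text: response.content });
}

You can then call this route from the client, for example with fetch("/api/generate", { method: "POST", body: JSON.stringify({ prompt: "Write a haiku about the sea" }) }).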

Advanced Usage

LangChain offers many powerful features beyond simple text generation. Here are a few examples:

Chains

Chains allow you to combine multiple steps of processing. For example, you can create a chain that answers a question based on context loaded from your documents:

import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { llm } from "@/lib/ai/langchain";

// Load the documents that will be used as context.
const loader = new TextLoader("path/to/your/documents.txt");
const documents = await loader.load();

const qaChain = new LLMChain({
  llm,
  prompt: PromptTemplate.fromTemplate(
    "Question: {question}\nAnswer the question based on the provided context:\n{context}"
  ),
});

const answer = await qaChain.invoke({
  question: "What is the main topic of the documents?",
  // load() returns an array of Document objects; join their text into one context string.
  context: documents.map((doc) => doc.pageContent).join("\n\n"),
});
console.log({ answer });

Agents

Agents can use tools to gather information and make decisions. Here’s a simple example using a calculator tool:

import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { Calculator } from "@langchain/community/tools/calculator";
import { llm } from "@/lib/ai/langchain";

// Give the agent a calculator tool it can call when it needs to do math.
const tools = [new Calculator()];

const executor = await initializeAgentExecutorWithOptions(tools, llm, {
  agentType: "zero-shot-react-description",
  verbose: true,
});

const input = `What is the square root of 256 multiplied by 2?`;

const result = await executor.invoke({ input });

console.log(`Got output ${JSON.stringify(result, null, 2)}`);

Model Selection

If you want to use a different model, you can change the model or provider in the src/lib/ai/langchain.ts file.
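
For example, to switch to another provider such as Anthropic, you could swap the chat model class in that file. A sketch, assuming you install the @langchain/anthropic package and set ANTHROPIC_API_KEY in your .env file:

src/lib/ai/langchain.ts
import { ChatAnthropic } from "@langchain/anthropic";

// Reads ANTHROPIC_API_KEY from the environment by default.
export const llm = new ChatAnthropic({
  model: "claude-3-5-sonnet-latest", // example model name
  temperature: 0.7,
});

Because the rest of the app only imports llm from this file, swapping the provider here requires no other changes.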

You can explore the available chat model integrations in the LangChain documentation.
