LangChain
LangChain gives you tools for every step of the agent development lifecycle. This guide demonstrates how to integrate Vercel AI Gateway with LangChain to access various AI models and providers.
First, create a new directory for your project and initialize it:

```shell
mkdir langchain-ai-gateway
cd langchain-ai-gateway
pnpm init
```
Install the required LangChain packages along with the `dotenv` and `@types/node` packages:

```shell
pnpm i langchain @langchain/core @langchain/openai dotenv @types/node
```
Create a `.env` file with your Vercel AI Gateway API key:

```
AI_GATEWAY_API_KEY=your-api-key-here
```
If you're using the AI Gateway from within a Vercel deployment, you can also use the `VERCEL_OIDC_TOKEN` environment variable, which is provided automatically.

Create a new file called `index.ts` with the following code:

```ts
import 'dotenv/config';
import { ChatOpenAI } from '@langchain/openai';
import { HumanMessage } from '@langchain/core/messages';

async function main() {
  console.log('=== LangChain Chat Completion with AI Gateway ===');

  const apiKey = process.env.AI_GATEWAY_API_KEY || process.env.VERCEL_OIDC_TOKEN;

  const chat = new ChatOpenAI({
    apiKey: apiKey,
    modelName: 'openai/gpt-4o-mini',
    temperature: 0.7,
    configuration: {
      baseURL: 'https://ai-gateway.vercel.sh/v1',
    },
  });

  try {
    const response = await chat.invoke([
      new HumanMessage('Write a one-sentence bedtime story about a unicorn.'),
    ]);
    console.log('Response:', response.content);
  } catch (error) {
    console.error('Error:', error);
  }
}

main().catch(console.error);
```
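The key lookup above prefers `AI_GATEWAY_API_KEY` and falls back to `VERCEL_OIDC_TOKEN`. A minimal standalone sketch of that fallback logic (the `resolveGatewayApiKey` helper is our own illustration, not part of LangChain or the AI Gateway SDK):

```typescript
// Hypothetical helper illustrating the env-var fallback used above.
function resolveGatewayApiKey(env: Record<string, string | undefined>): string {
  // Prefer an explicit gateway key; fall back to the OIDC token
  // that Vercel deployments provide automatically.
  const key = env.AI_GATEWAY_API_KEY || env.VERCEL_OIDC_TOKEN;
  if (!key) {
    throw new Error(
      'Set AI_GATEWAY_API_KEY (or deploy on Vercel to receive VERCEL_OIDC_TOKEN).',
    );
  }
  return key;
}

// Local development: an explicit key wins.
console.log(resolveGatewayApiKey({ AI_GATEWAY_API_KEY: 'example-key' }));

// Vercel deployment: only the OIDC token is present.
console.log(resolveGatewayApiKey({ VERCEL_OIDC_TOKEN: 'example-token' }));
```

Failing fast with a clear error here is friendlier than letting the request reach the gateway with an `undefined` key.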
This code:

- Initializes a `ChatOpenAI` instance configured to use the AI Gateway
- Sets the model `temperature` to `0.7`
- Makes a chat completion request
- Handles any potential errors
Run your application using Node.js:

```shell
pnpm dlx tsx index.ts
```
You should see a response from the AI model in your console.