Integrating OpenAI API in a Next.js App — Lessons from the MRV Platform
When I integrated OpenAI into the MRV agricultural carbon platform, I expected the hard part to be prompt engineering. The actual hard part was everything around the API call — rate limits, error handling, cost control, and user experience when AI is slow.
Wrap every API call in a server action
Never call the OpenAI API from a Client Component. The API key is a secret. Use a Next.js server action or Route Handler. This is table stakes.
```typescript
// app/actions/analyze.ts
'use server';

import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function analyzeFieldActivity(activityData: string) {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: 'You are an expert in agricultural carbon measurement.' },
      { role: 'user', content: activityData },
    ],
    max_tokens: 500,
  });
  return response.choices[0]?.message?.content ?? null;
}
```

Design prompts with structured output in mind
Asking for free-text output and then parsing it is fragile. Ask for JSON explicitly and validate the response with Zod. On the MRV platform, every AI response was validated against a schema before being stored in the database.
Rate limits will hit you in production
OpenAI's tier-one rate limits are more restrictive than you expect. Implement exponential backoff. Cache results aggressively — if the same field activity description comes in twice, return the cached analysis rather than hitting the API again. In production, caching reduced API calls by over 40%.
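A sketch of the retry side, assuming the names `backoffMs` and `withRetry` (not from the post) and leaving the caching layer to a hash-keyed store of your choice:

```typescript
// Exponential backoff: 500ms, 1s, 2s, ... capped at 30s.
export function backoffMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Retry wrapper with full jitter. A real version would inspect the error
// and only retry on 429s and 5xx responses; this sketch retries on any failure.
export async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  baseMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const jitter = Math.random() * backoffMs(attempt, baseMs);
      await new Promise((resolve) => setTimeout(resolve, jitter));
    }
  }
  throw lastError;
}
```

For the cache, hashing the activity description and keying a Map (or Redis, for multi-instance deployments) on that hash is enough to catch exact-duplicate requests.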
Cost control: always set max_tokens
It is easy to accidentally burn through a budget with uncapped completions. Set `max_tokens` conservatively, monitor usage in the OpenAI dashboard, and consider gpt-4o-mini instead of gpt-4o for high-volume, lower-complexity tasks. On MRV, switching to mini for analysis tasks reduced cost by 70% with negligible quality loss.
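The model-per-task decision can be made explicit in code. A hypothetical routing helper (the actual MRV rules are not described in the post; this just illustrates the shape):

```typescript
// Hypothetical per-task budget -- cheap, capped completions for high-volume
// analysis, the larger model and a bigger cap only when flagged as complex.
type TaskComplexity = 'routine' | 'complex';

interface CompletionBudget {
  model: 'gpt-4o-mini' | 'gpt-4o';
  maxTokens: number;
}

export function budgetFor(complexity: TaskComplexity): CompletionBudget {
  return complexity === 'complex'
    ? { model: 'gpt-4o', maxTokens: 1000 }
    : { model: 'gpt-4o-mini', maxTokens: 500 };
}
```

Centralizing the choice in one function means the cost policy lives in one place instead of being scattered across call sites.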
Graceful fallbacks are essential
AI calls fail. The network times out, the API returns an error, the model refuses. Every AI-powered feature in the MRV app has a manual fallback path. Users can always complete their task without AI assistance. This is both good UX and good engineering.
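One way to sketch that fallback contract, with illustrative names:

```typescript
// Every AI feature returns either an AI result or a manual-path signal,
// never an unhandled error. Names here are hypothetical, not from the MRV code.
type AnalysisResult =
  | { source: 'ai'; summary: string }
  | { source: 'manual'; summary: null };

export async function analyzeWithFallback(
  aiCall: () => Promise<string | null>,
): Promise<AnalysisResult> {
  try {
    const summary = await aiCall();
    if (summary) return { source: 'ai', summary };
  } catch {
    // Swallow the failure; logging and metrics would go here.
  }
  // Manual path: the UI asks the user to classify the activity themselves.
  return { source: 'manual', summary: null };
}
```

Because the return type names the source, the UI can render the manual form whenever `source` is `'manual'`, and the user's task is never blocked on the API.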