Build AI-powered room styling apps with Gemini vision + image generation, Unkey rate limiting, Assistant UI chat components, and Next.js App Router
```bash
npx @senso-ai/shipables install batra98/spacelift-ai-room-stylist
```

Use this skill when building an AI-powered interior design or room styling application with this flow:

```text
User uploads photo
  → Gemini 2.5 Flash (vision + chat + tool orchestration)
  → search_furniture tool → /api/products (relevance-scored search)
  → render_design tool → /api/edit-room (Gemini 2.0 Flash image generation)
  → Assistant UI (streaming chat rendering)
  → Unkey (rate limiting on expensive render calls)
```
| Layer | Tool |
|---|---|
| Framework | Next.js 16 App Router, React 19 |
| AI Chat | Vercel AI SDK v6 (ai, @ai-sdk/google) |
| Chat UI | @assistant-ui/react, @assistant-ui/react-ai-sdk, @assistant-ui/react-markdown |
| Vision + Text | Gemini 2.5 Flash |
| Image Generation | Gemini 2.0 Flash experimental image generation (gemini-2.0-flash-exp) |
| Rate Limiting | @unkey/ratelimit (per-IP throttling) |
| API Key Auth | @unkey/api (key verification with fail-open) |
| Styling | Tailwind CSS 4, Framer Motion |
Always create the Google provider at module scope and set maxDuration for long-running image generation:
```ts
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { streamText, tool, stepCountIs } from "ai";
import { z } from "zod";

// Allow up to 60s for requests whose tool calls trigger image generation
export const maxDuration = 60;

const google = createGoogleGenerativeAI({
  apiKey: process.env.GEMINI_API_KEY || "",
});
```
Define tools with Zod schemas for type-safe inputs — in AI SDK v5+ the field is `inputSchema` (it replaced the older `parameters`). Use `stopWhen: stepCountIs(n)` for multi-step reasoning (it replaced the `maxSteps` option):

```ts
const result = streamText({
  model: google("gemini-2.5-flash-preview-05-20"),
  system: "You are an interior design expert...",
  messages,
  tools: {
    search_furniture: tool({
      description: "Search the furniture catalog",
      inputSchema: z.object({
        queries: z.array(z.string()).describe("2-4 search queries"),
      }),
      execute: async ({ queries }) => {
        // Call your product search API
      },
    }),
    render_design: tool({
      description: "Render the room with selected products",
      inputSchema: z.object({
        productIds: z.array(z.string()),
        roomImageBase64: z.string(),
      }),
      execute: async ({ productIds, roomImageBase64 }) => {
        // Call your image editing API
      },
    }),
  },
  stopWhen: stepCountIs(3),
});
```
Always rate-limit expensive AI operations (image generation). Use fail-open so the app works without Unkey configured:
```ts
import { Ratelimit } from "@unkey/ratelimit";

// Fail-open: if UNKEY_ROOT_KEY is unset, skip rate limiting entirely
const ratelimit = process.env.UNKEY_ROOT_KEY
  ? new Ratelimit({
      rootKey: process.env.UNKEY_ROOT_KEY,
      namespace: "spacelift.render",
      limit: 5,
      duration: "1h",
    })
  : null;

// In your API handler:
if (ratelimit) {
  const ip = req.headers.get("x-forwarded-for") ?? "anonymous";
  const { success } = await ratelimit.limit(ip);
  if (!success) return new Response("Rate limited", { status: 429 });
}
```
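One subtlety with the handler above: behind proxies, `x-forwarded-for` can carry a comma-separated chain of addresses, and only the first entry is the client. A small helper (`getClientIp` is a hypothetical name, not part of the original code) keeps the rate-limit key stable:

```typescript
// Hypothetical helper: extract the client IP from an x-forwarded-for header.
// The header may contain a comma-separated proxy chain; the first entry is
// the original client. Falls back to "anonymous" so the rate-limit key is
// always defined.
function getClientIp(forwardedFor: string | null): string {
  if (!forwardedFor) return "anonymous";
  const first = forwardedFor.split(",")[0]?.trim();
  return first && first.length > 0 ? first : "anonymous";
}
```

Usage in the handler: `const ip = getClientIp(req.headers.get("x-forwarded-for"));`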
The AI SDK v6 changed message formats. Tool results require structured output objects. Always filter incomplete tool calls:
```ts
// Tool results must be wrapped in a structured output object
const toolResultOutput = {
  type: "json" as const,
  value: rawOutput ?? null,
};

// Only include completed tool calls (those with output) in history
const completedToolParts = toolParts.filter((p) => p.output !== undefined);
```
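The two rules above can be combined into one pure function. This is a sketch under assumptions: the `ToolPart` shape and the `toCompletedToolResults` name are illustrative, mirroring only the fields used in the snippet:

```typescript
// Hypothetical ToolPart shape, mirroring the fields used above.
type ToolPart = {
  toolCallId: string;
  toolName: string;
  input: unknown;
  output?: unknown;
};

// Keep only tool calls that completed (have an output), and wrap each
// result in the structured { type: "json", value } format.
function toCompletedToolResults(parts: ToolPart[]) {
  return parts
    .filter((p) => p.output !== undefined)
    .map((p) => ({
      toolCallId: p.toolCallId,
      toolName: p.toolName,
      output: { type: "json" as const, value: p.output ?? null },
    }));
}
```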
When using Gemini for room editing, be extremely specific about placement rules per product category:
```ts
const prompt = `Edit this room photo to naturally place these items:
${products.map((p) => `- ${p.name} (${p.category})`).join("\n")}

PLACEMENT RULES:
- Rugs: flat on the floor with proper perspective
- Wall art: on walls at eye level (5-6 feet)
- Floor lamps: standing on the floor beside furniture
- Plants: on surfaces or floor corners
- Maintain existing room lighting, shadows, and perspective`;
```
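Extracting the template into a function keeps the placement rules attached to every render request. A minimal sketch — `buildEditPrompt` and the `CatalogItem` shape are hypothetical names, not part of the original code:

```typescript
type CatalogItem = { name: string; category: string };

// Hypothetical helper: assemble the room-edit prompt from selected products
// so the placement rules are always included.
function buildEditPrompt(products: CatalogItem[]): string {
  const itemLines = products
    .map((p) => `- ${p.name} (${p.category})`)
    .join("\n");
  return [
    "Edit this room photo to naturally place these items:",
    itemLines,
    "",
    "PLACEMENT RULES:",
    "- Rugs: flat on the floor with proper perspective",
    "- Wall art: on walls at eye level (5-6 feet)",
    "- Floor lamps: standing on the floor beside furniture",
    "- Plants: on surfaces or floor corners",
    "- Maintain existing room lighting, shadows, and perspective",
  ].join("\n");
}
```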
For small catalogs, word-level tokenization with weighted scoring works well without a vector database:
```ts
function scoreProduct(product: Product, queryWords: string[]): number {
  let score = 0;
  for (const word of queryWords) {
    const w = word.toLowerCase();
    if (product.name.toLowerCase().includes(w)) score += 3; // name match weighs most
    if (product.category.toLowerCase().includes(w)) score += 2;
    if (product.tags.some((t) => t.toLowerCase().includes(w))) score += 1;
  }
  return score;
}
```
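Putting the scorer to work: a sketch of the search endpoint's core, ranking an in-memory catalog. The `Product` shape and the `searchCatalog` name are assumptions; the scorer is reproduced so the sketch is self-contained:

```typescript
type Product = { id: string; name: string; category: string; tags: string[] };

// Weighted word-level scorer, as above.
function scoreProduct(product: Product, queryWords: string[]): number {
  let score = 0;
  for (const word of queryWords) {
    const w = word.toLowerCase();
    if (product.name.toLowerCase().includes(w)) score += 3;
    if (product.category.toLowerCase().includes(w)) score += 2;
    if (product.tags.some((t) => t.toLowerCase().includes(w))) score += 1;
  }
  return score;
}

// Hypothetical searchCatalog: tokenize the query, score every product,
// drop zero-score entries, and return the top results by score.
function searchCatalog(catalog: Product[], query: string, limit = 5): Product[] {
  const words = query.split(/\s+/).filter(Boolean);
  return catalog
    .map((p) => ({ p, score: scoreProduct(p, words) }))
    .filter((x) => x.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((x) => x.p);
}
```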
Separate "original upload" state from "working image" state to prevent before/after showing identical images:
```ts
const [roomImage, setRoomImage] = useState<string | null>(null); // current working image
const [originalRoomImage, setOriginalRoomImage] = useState<string | null>(null); // never overwritten

const handleImageUpload = (dataUrl: string) => {
  setRoomImage(dataUrl);
  setOriginalRoomImage(dataUrl); // set once on upload
};

const handleRoomEdited = (base64: string) => {
  setRoomImage(`data:image/jpeg;base64,${base64}`); // update working image only
};
```
Common gotchas:
- Tool results need a structured `output` — wrap in `{ type: "json", value: ... }`
- Only forward tool calls where `output !== undefined`
- Don't use `localhost` for internal API calls in production — use `process.env.VERCEL_URL` or `APP_URL`

Environment variables:

```bash
GEMINI_API_KEY=        # Required — Google AI Studio API key
UNKEY_ROOT_KEY=        # Optional — Unkey root key for rate limiting (fail-open without it)
NEXT_PUBLIC_BASE_URL=  # Optional — public URL for internal API calls
```
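The localhost gotcha can be enforced in one spot. A sketch under assumptions — `resolveBaseUrl` is a hypothetical helper; `VERCEL_URL` is the host (without scheme) that Vercel injects at runtime:

```typescript
// Hypothetical helper: resolve the base URL for internal API calls.
// Prefers an explicit public URL, then the Vercel-provided host, and only
// falls back to localhost for local development.
function resolveBaseUrl(env: Record<string, string | undefined>): string {
  if (env.NEXT_PUBLIC_BASE_URL) return env.NEXT_PUBLIC_BASE_URL;
  if (env.VERCEL_URL) return `https://${env.VERCEL_URL}`;
  return "http://localhost:3000";
}
```

Usage: `fetch(`${resolveBaseUrl(process.env)}/api/products`)` works the same locally and on Vercel.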