AI & ML · Featured

AI-Driven UI Components: Building Intelligent Interfaces with React and LLMs

Learn how to integrate Large Language Models into React applications to create AI-powered UI components including smart search, auto-complete, content generation, and conversational interfaces.

Sameer Sabir
Updated:
14 min read
AI · React · LLM · OpenAI · Next.js · UI Components · Vercel AI SDK


The integration of Large Language Models (LLMs) into frontend applications is no longer experimental — it's becoming a standard practice. In 2026, users expect intelligent, context-aware interfaces. This guide shows you how to build AI-powered UI components using React and modern AI tooling.

The AI-First Frontend Stack

Here's the modern stack for building AI-driven interfaces:

  • React 19 + Next.js 15: Server Components keep prompts and API keys on the server
  • Vercel AI SDK: Streaming responses with React Server Components
  • OpenAI / Anthropic APIs: LLM providers
  • TanStack Query: Caching AI responses efficiently
  • Zod: Schema validation for AI outputs

Setting Up the Vercel AI SDK

npm install ai @ai-sdk/openai @ai-sdk/react

Then create a shared provider instance:

// lib/ai.ts
import { createOpenAI } from "@ai-sdk/openai";

export const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
});
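The non-null assertion on OPENAI_API_KEY hides a missing key until the first request fails at runtime. If you prefer to fail fast at startup, a tiny guard like this works (a sketch, not part of the SDK):

```typescript
// Read a required environment variable, throwing a clear error if it's
// absent instead of letting the first API call fail cryptically.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}
```

You would then write `apiKey: requireEnv("OPENAI_API_KEY")` in lib/ai.ts.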

Building an AI Chat Component

Let's build a conversational interface that streams responses in real-time:

// components/AIChatBox.tsx
"use client";

import { useChat } from "@ai-sdk/react";

export function AIChatBox() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: "/api/chat",
  });

  return (
    <div className="flex flex-col h-96 bg-gray-900 rounded-xl">
      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`flex ${
              message.role === "user" ? "justify-end" : "justify-start"
            }`}
          >
            <div
              className={`max-w-[80%] rounded-lg px-4 py-2 ${
                message.role === "user"
                  ? "bg-blue-600 text-white"
                  : "bg-gray-800 text-gray-200"
              }`}
            >
              {message.content}
            </div>
          </div>
        ))}
      </div>
      
      <form onSubmit={handleSubmit} className="p-4 border-t border-gray-800">
        <div className="flex gap-2">
          <input
            value={input}
            onChange={handleInputChange}
            placeholder="Ask anything..."
            className="flex-1 bg-gray-800 text-white rounded-lg px-4 py-2"
            disabled={isLoading}
          />
          <button
            type="submit"
            disabled={isLoading}
            className="bg-blue-600 text-white px-4 py-2 rounded-lg"
          >
            {isLoading ? "..." : "Send"}
          </button>
        </div>
      </form>
    </div>
  );
}

The API Route with Streaming

// app/api/chat/route.ts
import { streamText } from "ai";
import { openai } from "@/lib/ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    system: "You are a helpful assistant specialized in web development.",
    messages,
  });

  return result.toDataStreamResponse();
}

Smart Search with AI

Build a search component that understands natural language queries:

"use client";

import { useState, useCallback } from "react";
import { useCompletion } from "@ai-sdk/react";
import { debounce } from "lodash-es";

interface SearchResult {
  title: string;
  description: string;
  url: string;
  relevance: number;
}

export function AISearch() {
  const [results, setResults] = useState<SearchResult[]>([]);
  
  const { complete, isLoading } = useCompletion({
    api: "/api/search",
    onFinish: (_, completion) => {
      try {
        const parsed = JSON.parse(completion);
        setResults(parsed.results);
      } catch {
        setResults([]);
      }
    },
  });

  // Memoize the debounced function so every keystroke reuses the same timer
  // (in production, also cancel any pending call on unmount).
  const debouncedSearch = useCallback(
    debounce((query: string) => {
      if (query.length > 2) {
        complete(query);
      }
    }, 300),
    [complete]
  );

  return (
    <div className="relative">
      <input
        type="text"
        placeholder="Search with natural language..."
        onChange={(e) => debouncedSearch(e.target.value)}
        className="w-full bg-gray-800 rounded-lg px-4 py-3 text-white"
      />
      
      {isLoading && (
        <div className="absolute right-3 top-3">
          <div className="animate-spin h-5 w-5 border-2 border-blue-500 rounded-full border-t-transparent" />
        </div>
      )}

      {results.length > 0 && (
        <div className="absolute top-full mt-2 w-full bg-gray-800 rounded-lg shadow-xl z-50">
          {results.map((result, index) => (
            <a
              key={index}
              href={result.url}
              className="block p-4 hover:bg-gray-700 transition-colors"
            >
              <h4 className="text-white font-medium">{result.title}</h4>
              <p className="text-gray-400 text-sm mt-1">{result.description}</p>
            </a>
          ))}
        </div>
      )}
    </div>
  );
}
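One caveat: the onFinish handler above trusts the model to emit valid JSON, which LLMs don't guarantee. A few lines of defensive parsing (a hypothetical helper, not part of the SDK) keep malformed completions from ever reaching the UI:

```typescript
interface SearchResult {
  title: string;
  description: string;
  url: string;
  relevance: number;
}

// Parse the model's completion defensively: fall back to an empty result
// set on invalid JSON, and drop any entries missing the expected fields.
function parseSearchCompletion(completion: string): SearchResult[] {
  try {
    const parsed = JSON.parse(completion);
    if (!Array.isArray(parsed?.results)) return [];
    return parsed.results.filter(
      (r: any) =>
        typeof r?.title === "string" &&
        typeof r?.url === "string" &&
        typeof r?.relevance === "number"
    );
  } catch {
    return [];
  }
}
```

The onFinish callback then becomes a one-liner: `setResults(parseSearchCompletion(completion))`.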

AI-Powered Content Suggestions

Create a textarea that suggests content as users type:

"use client";

import { useState, useRef } from "react";
import { useCompletion } from "@ai-sdk/react";

export function AITextArea() {
  const [value, setValue] = useState("");
  const [suggestion, setSuggestion] = useState("");
  const textareaRef = useRef<HTMLTextAreaElement>(null);
  const debounceRef = useRef<ReturnType<typeof setTimeout> | undefined>(undefined);

  const { complete } = useCompletion({
    api: "/api/suggest",
    onFinish: (_, completion) => {
      setSuggestion(completion);
    },
  });

  const handleKeyDown = (e: React.KeyboardEvent) => {
    if (e.key === "Tab" && suggestion) {
      e.preventDefault();
      setValue(value + suggestion);
      setSuggestion("");
    }
  };

  const handleChange = (e: React.ChangeEvent<HTMLTextAreaElement>) => {
    const newValue = e.target.value;
    setValue(newValue);
    setSuggestion("");

    // Debounce: only request a suggestion once the user pauses typing
    clearTimeout(debounceRef.current);
    if (newValue.length > 20) {
      debounceRef.current = setTimeout(() => complete(newValue), 500);
    }
  };

  return (
    <div className="relative">
      <textarea
        ref={textareaRef}
        value={value}
        onChange={handleChange}
        onKeyDown={handleKeyDown}
        className="w-full bg-gray-800 text-white rounded-lg p-4 min-h-[200px] resize-none"
        placeholder="Start writing..."
      />
      {suggestion && (
        <div className="absolute bottom-4 left-4 right-4 text-gray-500 pointer-events-none">
          <span className="invisible">{value}</span>
          <span className="text-gray-600">{suggestion}</span>
          <span className="text-xs text-gray-500 ml-2">(Tab to accept)</span>
        </div>
      )}
    </div>
  );
}
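The /api/suggest route isn't shown in this article; whatever shape it takes, trimming the draft before prompting keeps token cost and latency down. A hypothetical prompt builder:

```typescript
// Build a continuation prompt from only the most recent portion of the
// draft. Long drafts add tokens (cost + latency) without improving the
// next few words of the suggestion.
function buildSuggestPrompt(draft: string, maxChars = 500): string {
  const context = draft.slice(-maxChars);
  return `Continue the following text naturally. Reply with only the continuation:\n\n${context}`;
}
```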

Structured Output with AI

Use Zod schemas to get structured data from AI:

// app/api/analyze/route.ts
import { generateObject } from "ai";
import { openai } from "@/lib/ai";
import { z } from "zod";

const SentimentSchema = z.object({
  sentiment: z.enum(["positive", "negative", "neutral"]),
  confidence: z.number().min(0).max(1),
  keywords: z.array(z.string()),
  summary: z.string(),
  actionItems: z.array(z.object({
    task: z.string(),
    priority: z.enum(["low", "medium", "high"]),
  })),
});

export async function POST(req: Request) {
  const { text } = await req.json();

  const result = await generateObject({
    model: openai("gpt-4o"),
    schema: SentimentSchema,
    prompt: `Analyze the following customer feedback: "${text}"`,
  });

  return Response.json(result.object);
}
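The client receives this structured object as plain JSON. If you can't share the Zod schema across the network boundary, a plain type guard restores some safety; this hypothetical one mirrors the actionItems part of SentimentSchema:

```typescript
type Priority = "low" | "medium" | "high";

interface ActionItem {
  task: string;
  priority: Priority;
}

// Narrow an unknown JSON value to an ActionItem, checking each field
// the same way the Zod schema does on the server.
function isActionItem(x: unknown): x is ActionItem {
  const item = x as ActionItem;
  return (
    typeof item?.task === "string" &&
    (item.priority === "low" ||
      item.priority === "medium" ||
      item.priority === "high")
  );
}
```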

Performance Best Practices

1. Stream Everything

Never wait for the full AI response. Always stream:

// Use streamText instead of generateText
const result = streamText({
  model: openai("gpt-4o"),
  messages,
});

return result.toDataStreamResponse();

2. Cache AI Responses

Use React's cache or TanStack Query to avoid redundant AI calls:

import { cache } from "react";
import { generateText } from "ai";
import { openai } from "@/lib/ai";

export const getProductDescription = cache(async (productId: string) => {
  const result = await generateText({
    model: openai("gpt-4o-mini"),
    prompt: `Generate a description for product ${productId}`,
  });
  return result.text;
});
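Note that React's cache deduplicates only within a single server request. For reuse across requests you need an expiring store; here's a minimal in-memory sketch (single-process only, so use Redis or similar in production):

```typescript
// A minimal keyed cache with time-to-live. Keys are prompts; values
// expire ttlMs after being set and are lazily evicted on read.
class ResponseCache<T> {
  private store = new Map<string, { value: T; expires: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```

On a cache miss you call the model and `set` the result; identical prompts within the TTL then skip the API entirely.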

3. Use Smaller Models for Simple Tasks

Not every AI task needs GPT-4o. Use lighter models for simple tasks:

// Simple classification - use a smaller model
const result = await generateObject({
  model: openai("gpt-4o-mini"),
  schema: CategorySchema,
  prompt: `Classify this text: "${text}"`,
});

// Complex reasoning - use a larger model
const analysis = await generateText({
  model: openai("gpt-4o"),
  prompt: complexAnalysisPrompt,
});
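If model selection happens in several places, it helps to centralize the routing decision rather than hard-coding model ids at each call site. A hypothetical helper (model names as used in this article):

```typescript
type Task = "classify" | "extract" | "summarize" | "reason" | "generate";

// Route cheap, well-bounded tasks to the small model and reserve the
// large model for open-ended work. Adjust the mapping to your provider.
function pickModel(task: Task): string {
  switch (task) {
    case "classify":
    case "extract":
      return "gpt-4o-mini";
    default:
      return "gpt-4o";
  }
}
```

Call sites then read `model: openai(pickModel("classify"))`, and swapping models later is a one-line change.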

Accessibility Considerations

AI-powered components need extra accessibility attention:

  1. Always show loading states with proper aria-busy attributes
  2. Announce AI-generated content via aria-live regions
  3. Provide fallbacks when AI is unavailable
  4. Allow manual editing of AI-generated content
  5. Be transparent about AI-generated content
For example, a live region that announces streamed output:

<div
  role="status"
  aria-live="polite"
  aria-busy={isLoading}
>
  {isLoading ? (
    <span className="sr-only">Generating response...</span>
  ) : (
    <div>{response}</div>
  )}
</div>
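Point 3 in the list above, fallbacks, can be as simple as a wrapper that degrades to static content when the AI call throws (a hypothetical helper):

```typescript
// Run an AI call but degrade gracefully: if it rejects (network error,
// rate limit, provider outage), return a static fallback instead.
async function withFallback<T>(
  aiCall: () => Promise<T>,
  fallback: T
): Promise<T> {
  try {
    return await aiCall();
  } catch {
    return fallback;
  }
}
```

For example, `withFallback(() => generateDescription(id), cannedDescription)` keeps a product page rendering even when the provider is down.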

Conclusion

AI-driven UI components are transforming how users interact with web applications. By leveraging the Vercel AI SDK with React 19 and Next.js 15, you can build intelligent interfaces that feel natural and responsive. The key is to stream everything, cache intelligently, and always maintain accessibility standards. Start with a simple chat component and progressively enhance your application with more sophisticated AI features.

Found this blog helpful? Have questions or suggestions?
