
🏗️ Platform Fundamentals

The Vercel mental model, three compute runtimes, Fluid Compute, and Next.js rendering strategies — the foundational knowledge every SE needs.

The Vercel Platform — Mental Model

Vercel is a Developer Experience (DX) Platform — not just hosting. Four layers:

Developer Experience Layer

Git push → auto-build → preview URL → production. v0 (AI app builder), Vercel Agent, CI/CD pipeline.

Compute Layer

Serverless Functions, Edge Runtime, and Fluid Compute, unified under the Vercel Functions umbrella since mid-2025.

Edge Network Layer

Global CDN, Edge Cache, Routing Middleware. Edge Config (global KV), Image Optimization.

AI Cloud Layer

AI SDK, AI Gateway, Vercel Sandbox, Workflows. v0, use-workflow (durable workflows).

Core Value Proposition:

"From code to globally distributed, framework-optimised infrastructure in one git push — with zero configuration."

Compute — The Two Runtimes

Vercel Functions come in two runtimes: Edge and Serverless (Node.js). Fluid Compute, covered below, is an execution model layered on top of the Serverless runtime rather than a third runtime. Understanding the difference is essential for every architecture conversation.

| Feature | ⚡ Edge Runtime | 🖥️ Serverless (Node.js) |
| --- | --- | --- |
| Engine | V8 isolates | Node.js (full) |
| Location | CDN PoPs (100+ globally) | Regional data centres |
| Cold start | Near-zero | 100–500ms (mitigated by Fluid) |
| Max CPU | 35ms | Minutes (plan-dependent) |
| Max memory | 128MB | Up to 3GB |
| npm packages | Web API compatible only | All packages |
| File system | No | Yes (ephemeral) |
| Database access | No (use Edge Config) | Yes |
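In the Next.js App Router, a route chooses its runtime via segment config. A minimal sketch (the `x-vercel-ip-country` header is one of the geo headers Vercel's proxy sets):

```javascript
// Route segment config: opt this route into the Edge runtime.
// Omit it (or set 'nodejs', the default) for the Serverless runtime.
export const runtime = 'edge';

export async function GET(request) {
  const country = request.headers.get('x-vercel-ip-country') ?? 'unknown';
  return new Response(JSON.stringify({ country }), {
    headers: { 'content-type': 'application/json' },
  });
}
```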

⚡ Edge Runtime — Best For

  • Auth checks, A/B testing, feature flags
  • Geo-based redirects and personalisation
  • Request/response header manipulation
  • Rate limiting (simple, IP-based)
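The last bullet can be sketched as a fixed-window, in-memory limiter keyed by IP (names hypothetical). Note that this state only survives within a single warm instance, so production setups typically back it with a KV store:

```javascript
// Fixed-window rate limit: allow LIMIT requests per WINDOW_MS per IP.
const WINDOW_MS = 60_000;
const LIMIT = 5;
const hits = new Map(); // ip -> { count, windowStart }

function isRateLimited(ip, now = Date.now()) {
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now }); // new window
    return false;
  }
  entry.count += 1;
  return entry.count > LIMIT; // limited once the window quota is exceeded
}
```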

🖥️ Serverless — Best For

  • Server-side rendering (SSR pages)
  • API routes with database queries
  • AI/LLM inference calls
  • File processing, image generation

Fluid Compute — The 2025 Game Changer

Vercel's biggest architectural evolution. Enabled by default for new projects from April 23, 2025.

Shared Instances

Multiple invocations share the same physical instance concurrently. Think 'mini-servers' instead of single-use functions. Eliminates cold start overhead.
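Concretely, module-scope state (database clients, caches) is initialised once per instance and then reused by every invocation that lands on it. A minimal sketch with hypothetical names:

```javascript
// Module scope: created once per instance, shared across invocations.
let dbClient = null;
let initCount = 0;

function getDbClient() {
  if (!dbClient) {
    initCount += 1;                 // expensive setup runs once per instance
    dbClient = { connected: true }; // stand-in for a real connection
  }
  return dbClient;                  // later invocations reuse the same object
}

// Two "invocations" on the same warm instance share the client:
const a = getDbClient();
const b = getDbClient();
// initCount === 1 and a === b
```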

Active CPU Pricing

Pay only for milliseconds your code executes on CPU — not for I/O wait. A 10s LLM stream with 200ms CPU = 200ms billing instead of 10s.

Bytecode Caching

V8 bytecode is cached across invocations in production, eliminating parse/compile overhead on repeated executions.

Error Isolation

Unhandled errors in one concurrent request do not crash other requests sharing the same instance.

Multi-Region Failover

Enterprise: failover to another AZ in the same region first, then to the next closest region if entire region is down.

Cost Impact Example:

Traditional: 10s LLM streaming = 10s billing

Fluid: 10s streaming, 200ms CPU = 200ms billing → 80-90% cost reduction for AI routes
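The arithmetic behind that example, with rates omitted since the point is the billed duration:

```javascript
// Active CPU billing sketch: a 10s LLM stream with 200ms of actual CPU.
const wallClockMs = 10_000;   // total request duration (mostly I/O wait)
const activeCpuMs = 200;      // time actually spent executing code

const traditionalBilledMs = wallClockMs; // billed for the full duration
const fluidBilledMs = activeCpuMs;       // billed only for CPU time

const perRequestSaving = 1 - fluidBilledMs / traditionalBilledMs;
// 98% for this single request; blended real-world AI routes land around 80-90%
```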

Next.js Rendering Strategies

The core technical topic for any Vercel SE. Know when to use each and diagnose when a customer is using the wrong one.

SSG (Static Site Generation)

Pages rendered at build time. HTML generated once, served from CDN everywhere.

When to use: Marketing pages, landing pages, blog posts, docs

  • Maximum CDN cache hit rate
  • Best SEO performance
  • Millisecond responses globally
  • ⚠️ Requires rebuild to update
  • ⚠️ Not for personalised content
// Default in App Router for pages without dynamic data
export default async function Page() {
  const res = await fetch('https://api.example.com/data', {
    cache: 'force-cache'
  });
  const data = await res.json();
  return <div>{data.content}</div>;
}

ISR (Incremental Static Regeneration)

Static pages auto-regenerated in background when cache expires — no full rebuild needed.

When to use: E-commerce products, editorial content, large sites (100k+ pages)

  • Static speed + dynamic freshness
  • On-demand revalidation via webhooks
  • Global CDN distribution on Vercel
  • ⚠️ Brief staleness window
  • ⚠️ Each revalidation invokes a function
// Time-based revalidation
export default async function Page() {
  const res = await fetch('https://api.example.com/products', {
    next: { revalidate: 3600 } // regenerate after 1 hour
  });
  const products = await res.json();
  return <ProductList products={products} />;
}

SSR (Server-Side Rendering)

Pages rendered on every request using a Vercel Function. Data is always fresh.

When to use: Authenticated dashboards, real-time data, search results

  • Always-fresh data
  • Request-specific (cookies, headers, geo)
  • User-specific content
  • ⚠️ Every request invokes a function (cost)
  • ⚠️ Higher TTFB than static
export const dynamic = 'force-dynamic';

// auth() and db are app-specific helpers (session lookup + SQL client)
export default async function Page({ searchParams }) {
  const user = await auth();
  const data = await db.query(
    'SELECT * FROM orders WHERE user_id = $1',
    [user.id]
  );
  return <Dashboard orders={data} />;
}

RSC (React Server Components)

Components render on the server with zero client-side JS. Compose server + client freely.

When to use: Complex pages mixing static + dynamic, reducing client JS bundle

  • Zero JS shipped for server components
  • Direct DB access without API
  • Streaming with Suspense
  • ⚠️ No interactivity in server components
  • ⚠️ Requires 'use client' for event handlers
// Server Component (default) — zero client JS
async function ProductDetails({ id }) {
  const product = await db.getProduct(id);
  return <div>{product.name}</div>;
}

// Client Component (separate file) — for interactivity
'use client';
import { useState } from 'react';

function AddToCart({ productId }) {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount((c) => c + 1)}>Add</button>;
}

Decision Matrix

| Requirement | Strategy |
| --- | --- |
| Marketing site, blog, docs | SSG |
| E-commerce products, editorial content | ISR |
| Authenticated dashboards, user-specific data | SSR or RSC |
| Real-time data (prices, scores, alerts) | SSR (short cache or no-store) |
| Complex page with mix of static + dynamic | RSC with Suspense |
| Global personalisation by segment | Edge Middleware + ISR |
| Per-user personalisation | SSR + cookies |