
21. SE Interview Topics & Common Questions

Based on the actual Vercel job description and field engineering focus areas.

Technical discovery questions (you ask these to customers)

  1. "What framework are you currently using, and what does your deployment workflow look like today?"
  2. "What are your current Core Web Vitals scores? Have you instrumented real-user monitoring (RUM), or are you relying on lab data?"
  3. "Where does your data live — what databases and CMS are you using?"
  4. "Do you have preview deployments today? How do your teams review changes before merge?"
  5. "What are your compliance requirements — SOC 2, HIPAA, PCI, GDPR?"
  6. "What's your biggest pain point with your current frontend infrastructure?"
  7. "How many pages/routes does your application have? What's your build time?"

Technical questions you'll be asked in interviews

"Explain the difference between SSR and ISR and when you'd use each."

SSR renders on every request — right for auth-gated pages where data is user-specific or must be real-time. ISR serves a page generated at build time (or on first request) and regenerates it in the background once a configured revalidation interval has passed — right for content that updates periodically but doesn't need to be request-fresh. ISR delivers near-SSG performance with near-SSR data freshness.
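The regeneration behaviour is worth being able to sketch on a whiteboard. A minimal stale-while-revalidate cache models it (illustrative only — `IsrCache` is an invented name, not a Vercel or Next.js API):

```typescript
// A minimal stale-while-revalidate cache modelling ISR's behaviour.
type Entry = { html: string; renderedAt: number };

class IsrCache {
  private entry?: Entry;

  constructor(
    private revalidateMs: number,   // the ISR revalidation interval
    private render: () => string,   // stands in for rendering the page
  ) {}

  get(now: number): string {
    if (!this.entry) {
      // First request after deploy: render on demand and cache.
      this.entry = { html: this.render(), renderedAt: now };
      return this.entry.html;
    }
    const isStale = now - this.entry.renderedAt > this.revalidateMs;
    const served = this.entry.html; // always serve the cached copy instantly
    if (isStale) {
      // Regenerate "in the background": the NEXT visitor gets the fresh copy.
      this.entry = { html: this.render(), renderedAt: now };
    }
    return served;
  }
}
```

The property interviewers probe for: a stale ISR page is still served instantly from cache; only a subsequent request sees the regenerated copy.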

"What is Fluid Compute and why does it matter for AI applications?"

Traditional serverless bills for the entire request duration, including I/O wait. Fluid Compute bills only for active CPU time. For an LLM streaming response that takes 30 seconds but uses 200 ms of CPU, traditional billing charges for the full 30 seconds while Fluid charges for 200 ms — roughly a 150× cost reduction. This makes streaming AI applications economically viable on serverless.
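The arithmetic behind that claim, using the numbers from the scenario (the per-second rate cancels out, so only the ratio matters):

```typescript
// Billed time under each model for one streaming AI request.
const wallClockMs = 30_000;  // total request duration: mostly waiting on the LLM stream
const activeCpuMs = 200;     // actual compute time

const durationBilledMs = wallClockMs;  // traditional serverless: pay for the whole wait
const fluidBilledMs = activeCpuMs;     // Fluid Compute: pay for active CPU only

const costReduction = durationBilledMs / fluidBilledMs; // 30,000 / 200 = 150
```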

"A customer has a slow LCP on their e-commerce homepage. Walk me through your diagnosis."

Start with Speed Insights or Lighthouse to identify the LCP element. If it's an image: check if they're using next/image with priority set, check image format (WebP/AVIF), check if the image is being optimised or served at full resolution. If LCP is slow due to TTFB: check the rendering strategy — if SSR, profile the function execution time, look for slow database queries or waterfall fetches. Consider moving to ISR with webhook revalidation.
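When the LCP element is the hero image, the fix usually looks like this (a sketch — the file path, image source, and dimensions are illustrative assumptions):

```tsx
// app/page.tsx — hero image as the LCP element
import Image from "next/image";

export default function Home() {
  return (
    <Image
      src="/hero.jpg"
      alt="Seasonal sale hero"
      width={1200}
      height={600}
      priority      // preload; never lazy-load the LCP image
      sizes="100vw" // let the optimizer serve a correctly sized variant
    />
  );
}
```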

"How would you architect a Next.js app for a media company with 1M articles?"

Use ISR with on-demand revalidation. At build time, pre-generate the 1,000 most popular articles via generateStaticParams, so the highest-traffic pages are served from cache on day one. Every other article generates on its first request and is cached from then on. CMS publish webhooks call revalidatePath or revalidateTag to refresh specific articles, so content updates never require a redeploy.
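The two halves of that architecture can be sketched as follows (the file paths, the `fetchPopularSlugs` helper, and the webhook payload shape are illustrative assumptions, not a prescribed implementation):

```tsx
// app/articles/[slug]/page.tsx — pre-render only the most popular articles;
// everything else generates on first request and is then cached (ISR).
export async function generateStaticParams() {
  const popular = await fetchPopularSlugs(1000); // hypothetical CMS helper
  return popular.map((slug: string) => ({ slug }));
}

// app/api/revalidate/route.ts — CMS publish webhook triggers a targeted refresh.
import { revalidatePath } from "next/cache";
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  const { slug } = await req.json();
  revalidatePath(`/articles/${slug}`); // refresh just this one article
  return NextResponse.json({ revalidated: true });
}
```

In practice the webhook route should also verify a shared secret before revalidating, so arbitrary callers can't bust the cache.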

"What's the difference between Edge Runtime and Serverless Runtime, and when would you use each?"

Edge Runtime runs V8 isolates at CDN PoPs — millisecond cold starts, global execution, but limited API surface (no Node.js, no file system, no arbitrary npm packages). Serverless is full Node.js — any npm package, file system access, but regional and with cold start overhead. Use Edge for auth checks, A/B routing, geolocation redirects, lightweight header manipulation. Use Serverless for database queries, heavy computation, anything needing full Node.js.
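One of those Edge use cases as a sketch — a geolocation redirect in middleware (the `x-vercel-ip-country` header is the one Vercel sets; the locale mapping and matcher are illustrative assumptions):

```typescript
// middleware.ts — runs on the Edge Runtime at the PoP closest to the user
import { NextRequest, NextResponse } from "next/server";

export function middleware(req: NextRequest) {
  const country = req.headers.get("x-vercel-ip-country") ?? "US";
  // A lightweight routing decision: no database, no Node.js APIs needed.
  if (country === "DE" && !req.nextUrl.pathname.startsWith("/de")) {
    return NextResponse.redirect(new URL(`/de${req.nextUrl.pathname}`, req.url));
  }
  return NextResponse.next();
}

export const config = { matcher: "/((?!_next|api).*)" };
```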

Red flags in customer codebases (code audit scenarios)

These are real things you'll catch during a code audit engagement:

  1. cache: 'no-store' on every fetch() — entire site is effectively SSR
  2. 'use client' on root layout — entire app hydrates client-side, defeats RSC
  3. Fetching data in Client Components that should be in Server Components
  4. Large images served without next/image (no optimisation, no lazy loading)
  5. No cache tags — can't do targeted invalidation, forced to use revalidatePath('/') on everything
  6. API routes in pages/api not migrated to App Router Route Handlers
  7. Third-party fonts loaded via <link> rather than next/font — causes CLS
  8. Environment variables prefixed NEXT_PUBLIC_ for secrets — exposed to browser
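Red flags 1 and 5 share a fix worth having ready in the interview: tag the fetches so invalidation can be targeted (a fragment — the CMS URL, `id` variable, and tag names are illustrative):

```typescript
import { revalidateTag } from "next/cache";

// In a Server Component: cache the fetch and tag it, instead of opting
// the whole site out of the Data Cache with { cache: 'no-store' }.
const res = await fetch(`https://cms.example.com/products/${id}`, {
  next: { tags: ["products", `product-${id}`] },
});

// Later, in a webhook Route Handler after a CMS publish:
revalidateTag(`product-${id}`); // refreshes only fetches carrying this tag
```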