Frontend Interview Questions

Frontend interviews for people with a few years under their belt are less about reciting definitions and more about showing you can reason about real browsers: main-thread pressure, network waterfalls, state that leaks across routes, and components that looked fine in Storybook but fell apart in production. Hiring managers want evidence you partner with design and backend—not that you memorize every hook API.

The best candidates mix mechanics with empathy. They can explain why a list with five thousand rows hurt checkout completion, what they measured before optimizing, and how they rolled out a fix without freezing the roadmap. If your answers sound like blog posts from 2018, refresh them with something you shipped in the last year—even a small accessibility fix or bundle split counts.

Work through the topic hubs below in any order, but pair each with a user story: who was blocked, what you changed, how you verified it (Lighthouse, RUM, error rates, support tickets). That is the difference between "I know useMemo" and "I know when useMemo is a distraction."

Trending Sub-topics

What "senior frontend" usually means in the room

You own ambiguity. Interviewers listen for how you slice work, document trade-offs, and teach others without being patronizing. Be ready to defend TypeScript boundaries, testing strategy (what you snapshot vs what you integration-test), and how you keep feature velocity when tech debt whispers from every corner.

Practice out loud, not only in the editor

  • Explain a bug you fixed as if the listener has never seen your codebase.
  • Walk through a performance win with before/after metrics—even rough ones.
  • Describe how you would onboard a new hire to your component library in a week.

Accessibility and performance are friends

Interviewers increasingly connect keyboard support, focus management, and perceived speed. If you can describe one feature where improving semantics also improved metrics, you will stand out. If you have never thought about that overlap, spend an hour with your last PR and look for it—you will find something honest to say.

Frontend code you can relate to: Azure and real bugs

These are the kinds of snippets you scribble while saying, "this is how we stopped double-submitting invoices to Dynamics" or "this is how we stopped burning mobile data on repeat fetches."

Abortable fetch to an Azure-hosted API (React)

useEffect(() => {
  const ac = new AbortController();
  (async () => {
    const url = `/api/tenants/${tenantId}/usage`;
    const res = await fetch(url, {
      signal: ac.signal, // lets the cleanup below cancel an in-flight request
      headers: { Accept: 'application/json' }
    });
    if (!res.ok) throw new Error(`usage ${res.status}`);
    setUsage(await res.json());
  })().catch((e) => {
    // Aborts are expected on unmount or tenant switch; only surface real failures.
    if (e.name !== 'AbortError') setError(e);
  });
  return () => ac.abort(); // cancel when tenantId changes or the component unmounts
}, [tenantId]);

Same pattern applies when the browser talks to Azure API Management, App Service, or Container Apps behind Entra ID. The story interviewers want: you cancel in-flight work when the user switches tenants, so you do not race older responses over newer ones—that bug loves to appear right after a deploy.
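Cancellation pairs well with a latest-wins guard. The sketch below is an assumption, not a library API: `fetchUsage` and `onResult` are hypothetical stand-ins for your loader and state setter, and a sequence counter simply drops any response that belongs to a superseded request.

```javascript
// Latest-wins guard: ignore responses from requests that have been superseded.
// fetchUsage is a hypothetical async loader (e.g. wrapping fetch);
// onResult is whatever commits data to state (e.g. setUsage).
function createLatestWins(fetchUsage, onResult) {
  let latest = 0; // sequence number of the most recent request
  return async function load(tenantId) {
    const seq = ++latest;
    const data = await fetchUsage(tenantId);
    // Only commit if no newer request started while this one was in flight.
    if (seq === latest) onResult(data);
  };
}
```

Unlike AbortController, this does not free up the network, so in practice you often want both: abort what you can, and let the counter drop whatever still slips through.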

Lazy-load a heavy chart after first paint (code-splitting)

import { lazy, Suspense } from 'react';

const ChartPanel = lazy(() => import('./HeavyUsageChart'));

function Dashboard() {
  return (
    <Suspense fallback={<p>Loading chart…</p>}>
      <ChartPanel data={usage} />
    </Suspense>
  );
}
</Suspense>

Teams shipping internal portals on Azure Static Web Apps or SPAs in front of App Service hit this constantly: marketing wants a gorgeous dashboard, mobile wants fewer megabytes. You split the bundle, measure LCP, and maybe prefetch after idle—say that sequence out loud and you sound like you have shipped, not only read the React docs.
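The "prefetch after idle" step can be sketched as a small helper. The `prefetchWhenIdle` name is made up for this example; the real pieces are `requestIdleCallback` (with a setTimeout fallback for browsers that lack it) and the same importer function you hand to React.lazy.

```javascript
// Prefetch a lazily loaded chunk once the main thread is idle.
// `importer` is whatever you pass to React.lazy, e.g. () => import('./HeavyUsageChart').
function prefetchWhenIdle(importer) {
  const idle = globalThis.requestIdleCallback ?? ((cb) => setTimeout(cb, 200));
  return new Promise((resolve) => {
    idle(() => {
      // Kick off the fetch; swallow errors so prefetch stays best-effort.
      importer().then(resolve, () => resolve(null));
    });
  });
}
```

Calling `prefetchWhenIdle(() => import('./HeavyUsageChart'))` after first paint warms the cache, so the Suspense fallback barely flashes when the user actually opens the chart.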

Questions with sample answers

These are interview-ready outlines—sound human by swapping in your own metrics, team names, and war stories. The examples are generic on purpose so you can map them to what you actually shipped.

  1. Primary prompt

    A data table with 4k rows feels sluggish when users type in the global filter. How do you diagnose and fix it without blaming React generically?

    Start with the React Profiler: does each keystroke re-render every row? Then virtualize the list, debounce the filter input, memoize the row component, and derive the filtered list with useMemo; compare commit times before and after.

  2. Primary prompt

    Explain closures by walking through a bug you have seen with stale event handlers or timers—not a textbook definition.

    Example: a modal starts a setInterval whose callback reads price from its closure, so the display sticks at the old price; fix with a ref, or by clearing and rescheduling the interval when price changes.

  3. Primary prompt

    How do you decide between Context, a small global store, and colocated state for a new wizard flow?

    Colocate state in the wizard if only the wizard cares; reach for Context when many deep children need read-only, theme-like data; use a store only for complex cross-cutting state that needs middleware. The goal is to avoid prop drilling without over-engineering.

  4. Primary prompt

    Your LCP regressed after a hero image change. What steps do you take before you propose code changes?

    Verify the image's intrinsic dimensions, check the new file's size and format, confirm CDN cache hits and fetch priority, and compare waterfalls in Lighthouse; it might simply be that the CMS uploaded a 6 MB PNG.
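The debounce and derived-list steps from the first prompt can be sketched framework-free. This is a minimal version under stated assumptions: `debounce` is the classic trailing-edge variant, and `makeFilteredRows` is a single-slot cache standing in for what useMemo does inside a component (the `name` field on rows is hypothetical).

```javascript
// Debounce: collapse a burst of keystrokes into one trailing call.
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Derive the filtered list once per (rows, query) pair instead of on every render.
// In React this is the list you would wrap in useMemo; here it is a plain cache.
function makeFilteredRows() {
  let lastRows, lastQuery, lastResult;
  return (rows, query) => {
    if (rows === lastRows && query === lastQuery) return lastResult;
    lastRows = rows;
    lastQuery = query;
    lastResult = rows.filter((r) => r.name.toLowerCase().includes(query.toLowerCase()));
    return lastResult;
  };
}
```

The reference-stable result matters as much as the skipped work: memoized row components only bail out of re-rendering when the list they receive is the same object.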
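The stale-interval bug from the second prompt reduces to a few lines. Names here are invented for the demo: the buggy version captures the price once in its closure, while the fixed version reads through a mutable ref object, the same idea as a React useRef.

```javascript
// Stale closure: the interval callback captures `price` once and never sees updates.
function startTickerBuggy(price, onTick) {
  return setInterval(() => onTick(price), 10); // always reports the initial price
}

// Fix: read through a mutable ref so each tick sees the latest value.
function startTickerFixed(priceRef, onTick) {
  return setInterval(() => onTick(priceRef.current), 10);
}
```

In a component the equivalent fix is either `priceRef.current = price` on each render, or clearing and restarting the interval in an effect keyed on `price`.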

Follow-ups interviewers often ask

Expect nested "why?" questions—brief answers here; expand with your production defaults.

  1. Follow-up

    How would you prove the fix with RUM vs only Lighthouse in CI?

    RUM captures real devices/networks; CI lab is consistent but not representative—ship canary + compare p75 LCP in analytics.

  2. Follow-up

    What breaks if we server-render this component—what moves to the client boundary?

    Browser APIs, hooks, event handlers need client component; data fetch may move to server; watch hydration mismatch for random IDs.

  3. Follow-up

    How do you test this UI for keyboard-only users and screen reader announcements?

    Manual axe + VoiceOver/NVDA, RTL queries by role, focus order tests, live regions for async updates.

  4. Follow-up

    What is your rollback plan if the performance patch introduces a visual regression?

    Feature flag instant off; visual diff tests in CI; monitor errors; keep previous bundle artifact.

  5. Follow-up

    Where would you add observability so the next regression is obvious within minutes?

    RUM dashboards per route, bundle version tag, LCP element attribution, log hydration errors with release SHA.
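The "compare p75 LCP" step above is easy to make concrete. This uses the nearest-rank method over raw RUM samples; real analytics pipelines vary (interpolation, bucketing, outlier trimming), so treat it as the simplest defensible sketch rather than what any particular vendor computes.

```javascript
// Nearest-rank percentile over raw RUM samples (e.g. LCP values in ms).
function percentile(samples, p) {
  if (samples.length === 0) return NaN;
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```

Comparing `percentile(lcpSamples, 75)` for the canary cohort against the control cohort is the number that settles the "did the fix land" conversation.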
