JavaScript Interview Questions and Answers — STAR Format Guide for 4+ Years Experience (2026)
Updated February 26, 2026 • By Surya Singh • JavaScript • TypeScript • ES6+ • Interview • Frontend
Key Takeaways
- 10 in-depth STAR questions covering closures, event loop, prototypes, async/await, ES6+, memory, design patterns, TypeScript, modules, security
- Written for developers with 4+ years of production JavaScript experience
- Rapid-fire rounds on core language, async patterns, and tooling
- Includes E-E-A-T experience block, 8 common mistakes, and 8 FAQs for rich snippets
This guide covers every pillar tested in senior JavaScript interviews: closures and scope chain, the event loop (microtasks vs macrotasks), prototypal inheritance vs ES6 classes, async/await and error handling, ES6+ features in production, memory management, design patterns, TypeScript integration, module systems (CommonJS vs ESM), and security (XSS, CSRF, prototype pollution). Every answer uses the STAR method with real scenarios from production systems.
Answers reference the MDN JavaScript guide, TypeScript documentation, and the ECMAScript specification.
Table of Contents
- 1. Closures and Scope Chain — Debugging Memory Leaks
- 2. Event Loop — Microtasks vs Macrotasks
- 3. Prototypal Inheritance vs ES6 Classes
- 4. Promises, async/await, Error Handling
- 5. ES6+ Features in Production
- 6. Memory Management — Leak Detection
- 7. Design Patterns (Module, Observer, Factory, Strategy)
- 8. TypeScript Integration
- 9. Module Systems — CommonJS vs ESM
- 10. Security — XSS, CSRF, Prototype Pollution
- Rapid-Fire Practice
- From Real Experience
- Common Mistakes to Avoid
- FAQ (8 Questions)
- Related Interview Guides
1) How do you debug memory leaks caused by closures and the scope chain?
What interviewer evaluates: understanding of closures, scope chain, and GC behaviour.
Situation: A real-time analytics dashboard displayed live charts for 50 metrics. After 2 hours of use, the tab consumed 800 MB of RAM. Users reported the browser becoming sluggish. Each chart subscribed to WebSocket events via a closure that captured the entire metric history array.
Task: Identify and fix the leak without changing the feature behaviour.
Action:
- Heap snapshot diagnosis: Took Chrome DevTools heap snapshots at 10-minute intervals and compared them with the "Comparison" view. Found that `MetricChart` instances were retained because the WebSocket callback closure held a reference to the `metricsHistory` array, which grew unbounded (50 metrics × 1000 samples).
- Scope chain analysis: The callback was defined inside `useEffect` and closed over `metricsHistory`, `setMetricsHistory`, and the component instance. The WebSocket subscription was never cleaned up, so the closure lived for the tab's lifetime.
- Fix 1 — bounded history: Limited each metric to the last 100 samples using a ring-buffer-style updater: `{data: data.slice(-100)}` in the state updater. This prevented unbounded array growth regardless of subscription lifetime.
- Fix 2 — cleanup in useEffect: Returned a cleanup function from `useEffect` that unsubscribed from the WebSocket: `return () => ws.unsubscribe(chartId)`. When the component unmounted, the closure became unreachable and was GC'd.
- Fix 3 — WeakMap for cache: For a shared metric cache (to avoid re-fetching identical metrics), switched from `Map` to `WeakMap` keyed by chart component instance, so entries were collected when charts unmounted.
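Fixes 1 and 3 can be sketched in plain JavaScript (the names `appendSample` and `getCachedMetrics` are hypothetical; the React-specific cleanup in fix 2 is omitted here):

```javascript
const MAX_SAMPLES = 100;

// Fix 1: bounded history - the array can never grow past MAX_SAMPLES.
function appendSample(history, sample) {
  return [...history, sample].slice(-MAX_SAMPLES);
}

// Fix 3: cache keyed by chart instance via WeakMap - when the chart
// becomes unreachable, its cache entry is eligible for GC with it.
const metricCache = new WeakMap();
function getCachedMetrics(chart, compute) {
  if (!metricCache.has(chart)) metricCache.set(chart, compute());
  return metricCache.get(chart);
}
```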
Result: Memory stabilized at 120 MB after 4 hours. Heap snapshots showed zero retained MetricChart instances after navigating away. The ring buffer preserved chart behaviour while capping memory.
What separates good from great: Use heap snapshots to prove the leak before fixing. Show understanding of closure retention and the cleanup pattern. Mention WeakMap for cache keys tied to object lifecycle.
2) How do you diagnose issues when microtasks and macrotasks run out of order?
What interviewer evaluates: event loop internals, debugging async ordering.
Situation: A payment flow showed "Payment successful" before the confirmation API returned. The UI updated optimistically, but when the API failed, the UI had already navigated away. Users sometimes saw success then got charged-back. The bug was intermittent — it depended on network latency.
Task: Fix the ordering so the UI only updates after the API confirms success.
Action:
- Reproduction: Throttled the network to "Slow 3G" in DevTools and reproduced the bug: the success handler ran before the API response. Traced the code: `fetch()` returned a Promise (microtask queue), but a `setTimeout` in a dependency scheduled a macrotask that ran `showSuccess()` before the microtasks flushed.
- Event loop trace: The flow was: (1) fetch called, (2) `setTimeout(showSuccess, 0)` scheduled, (3) API response arrived, (4) the Promise resolution sat in the microtask queue. But the loop's order is fixed: sync code, then all microtasks, then one macrotask, so the timer alone could not explain the reordering. The real bug was different: `showSuccess` was being called from a `.then()` attached to a different promise that resolved earlier.
- Root cause: Two independent async flows: one for the payment API, one for a "session keepalive" that also resolved. The keepalive resolved first; its `.then()` triggered a state update that re-rendered and ran an effect that called `showSuccess`. The payment API was still pending.
- Fix — single source of truth: Removed the keepalive from affecting the payment flow. The payment component had one async flow: `await submitPayment()` then `showSuccess()`. No other effects could trigger success. Used `AbortController` to cancel the payment request if the user navigated away.
- Guarantee order: Ensured `showSuccess` was only called in the `.then()` of the payment promise, after the `response.ok` check. Wrote a test that mocked a delayed API and verified the UI did not show success before the mock resolved.
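The queue ordering above can be verified with a three-line trace (a minimal demo, not the payment code):

```javascript
// Event-loop ordering: sync code runs first, then all queued microtasks,
// then the next macrotask - regardless of scheduling order.
const order = [];

setTimeout(() => order.push('macrotask'), 0);          // task (macrotask) queue
Promise.resolve().then(() => order.push('microtask')); // microtask queue
order.push('sync');

// Right after this script runs, order is ['sync']; once the loop turns,
// it becomes ['sync', 'microtask', 'macrotask'].
```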
Result: Zero incorrect success displays in 3 months. The test suite caught 2 regressions where other developers introduced similar async ordering bugs. Payment charge-back rate dropped to baseline.
What separates good from great: Explain microtask vs macrotask ordering. Show how to reproduce timing bugs (network throttling, mocking). Demonstrate fixing async flow with a single, linear promise chain.
3) How do you explain prototypal inheritance vs ES6 classes in production code?
What interviewer evaluates: understanding of prototype chain and syntactic sugar.
Situation: A legacy codebase used constructor functions and Object.create() for inheritance. A new hire tried to extend a "class" using ES6 class syntax and broke the prototype chain — methods on the parent were undefined. The team debated whether to standardize on ES6 classes or keep the old pattern.
Task: Document the equivalence and migrate one module to ES6 classes as a proof of concept.
Action:
- Prototype equivalence: Wrote a comparison doc. ES6 `class Base {...}` is syntactic sugar for `function Base() {...}; Base.prototype.method = ...`. `class Child extends Base` sets `Child.prototype.__proto__ === Base.prototype` and `Child.__proto__ === Base` (for statics). The key: both use the same prototype chain under the hood.
- Migration rules: Constructor functions became `class` declarations. Methods on `Base.prototype.foo` became `foo() {...}` inside the class. `Base.call(this)` in child constructors became `super()`. Static methods moved to `static method()`.
- Proof of concept: Migrated the `ValidationRule` hierarchy (5 classes). Wrote Jest tests that verified `instanceof`, method resolution, and `super` calls. Ensured `Object.getPrototypeOf(Child.prototype) === Base.prototype`.
- Edge cases: Private fields and the `super` keyword require ES6 class syntax — there is no constructor-function equivalent. Documented that `Object.create(null)` produces an object with no prototype (useful for pure data bags).
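The equivalence is easy to demonstrate side by side (a minimal sketch; `OldBase` and `NewBase` are illustrative names):

```javascript
// Constructor-function inheritance, wired by hand.
function OldBase() {}
OldBase.prototype.greet = function () { return 'hi'; };

function OldChild() { OldBase.call(this); }           // "super()" by hand
OldChild.prototype = Object.create(OldBase.prototype); // link the chain
OldChild.prototype.constructor = OldChild;

// The same wiring via class syntax.
class NewBase { greet() { return 'hi'; } }
class NewChild extends NewBase {}
```

Both end up with `Object.getPrototypeOf(Child.prototype) === Base.prototype`; the class version additionally links `NewChild` itself to `NewBase`, so static members are inherited too.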
Result: The ValidationRule module migrated with zero behaviour change. The team adopted ES6 classes for new code. The comparison doc became the onboarding reference — new hires no longer mixed patterns incorrectly.
What separates good from great: Show that you understand ES6 classes are sugar. Explain super, static, and the prototype chain. Mention when constructor functions might still be preferred (e.g. no private fields in older envs).
4) How do you handle Promises, async/await, and errors at scale?
What interviewer evaluates: async error handling, concurrency control, cancellation.
Situation: A dashboard fetched 12 API endpoints in parallel. When one failed, the entire dashboard showed a generic error. Users could not see partial data. Additionally, rapid filter changes triggered 50 concurrent requests; the last response sometimes overwrote newer user intent.
Task: Support partial success (show data from succeeded endpoints, surface errors for failed ones) and prevent stale responses from overwriting newer state.
Action:
- Promise.allSettled: Switched from `Promise.all` to `Promise.allSettled`. Each result was `{status, value?, reason?}`. Mapped over results: fulfilled → render data, rejected → render an error boundary for that section. The dashboard could show 11 sections with data and 1 with an error message.
- Per-section error boundaries: Each section (chart, table, etc.) had its own try/catch in the data layer. Errors were captured as `{error: true, message: err.message}` in state. The UI component rendered either data or an inline "Failed to load" with a retry button.
- Request cancellation: Used `AbortController`. Each filter change created a new `AbortController`, passed its `signal` to `fetch(url, {signal})`, and aborted the previous controller. Responses from aborted requests were ignored — an `if (signal.aborted) return` guard in the handler prevented state updates.
- Loading per section: Each section had independent `isLoading` and `error` state. No single global loading overlay — users saw sections populate as they resolved.
- Retry with backoff: For transient errors (5xx, network), implemented exponential backoff: 1s, 2s, 4s. After 3 failures, showed the error and stopped retrying. Used a shared `fetchWithRetry` utility.
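The partial-success mapping might look like this (`loadSections` is a hypothetical helper, not from the original codebase):

```javascript
// Promise.allSettled never rejects: every fetcher yields either a data
// entry or a per-section error entry the UI can render independently.
async function loadSections(fetchers) {
  const results = await Promise.allSettled(fetchers.map((f) => f()));
  return results.map((r) =>
    r.status === 'fulfilled'
      ? { data: r.value }
      : {
          error: true,
          message: r.reason instanceof Error ? r.reason.message : String(r.reason),
        }
  );
}
```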
Result: Dashboard usable even when 2–3 APIs were down (e.g. during deployment). Stale response bugs eliminated. Users could retry failed sections individually. P95 load time improved because slow endpoints no longer blocked the whole UI.
What separates good from great: Use allSettled for partial success. Show AbortController for cancellation. Mention per-section error boundaries and retry strategy. Avoid global try/catch that hides partial data.
5) How do you use ES6+ features (optional chaining, nullish coalescing, destructuring) safely in production?
What interviewer evaluates: modern syntax mastery and edge-case awareness.
Situation: A B2B app received API responses from 20 different backend services. Some returned null, some undefined, some {nested: null}, some omitted keys entirely. The frontend had 200+ places accessing nested properties; 15% of production errors were Cannot read property of undefined.
Task: Harden property access across the codebase without a full rewrite.
Action:
- Optional chaining: Replaced manual `&&` guard chains (`user && user.address && user.address.city`) with `user?.address?.city`. Enforced optional chaining, via ESLint and code review, for any access deeper than 2 levels on API data. Reduced 80 manual null checks to single expressions.
- Nullish coalescing: For defaults, `value ?? 'default'` only falls back when `value` is `null` or `undefined`. Replaced `count || defaultCount` (which treated 0 as falsy) with `count ?? defaultCount` in 12 places — critical for numeric pagination where 0 is valid.
- Destructuring with defaults: `const {page = 1, limit = 20} = query` gave safe defaults. For nested data, `const {user: {name = 'Anonymous'} = {}} = data` works but gets complex; for deeply nested access, used optional chaining plus nullish coalescing: `data?.user?.name ?? 'Anonymous'`.
- Schema validation: At the API boundary, validated responses with Zod. Invalid shapes failed fast with clear errors. Optional chaining protected against unexpected structure, but Zod ensured we caught backend contract violations early.
- Code review checklist: Any new API data access must use optional chaining for depth > 2, nullish coalescing for defaults when 0 or an empty string is valid, and Zod for new endpoints.
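The `??` vs `||` distinction in a few lines (values are illustrative):

```javascript
const pageSize = 0;                 // 0 is a valid, user-chosen size
const withOr = pageSize || 20;      // 20 - bug: || treats 0 as falsy
const withNullish = pageSize ?? 20; // 0  - ?? only falls back on null/undefined

// Safe deep access with a default, as used for API data above.
const data = { user: null };
const name = data?.user?.name ?? 'Anonymous';
```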
Result: Cannot read property errors dropped 94%. The Zod layer caught 8 backend contract changes before they reached users. Code readability improved — nested access was self-documenting.
What separates good from great: Distinguish ?? vs || (0 and '' are valid). Show optional chaining for safe navigation. Mention schema validation at boundaries. Avoid over-destructuring deep objects.
6) How do you detect and fix memory leaks in a JavaScript application?
What interviewer evaluates: GC understanding, tooling, remediation patterns.
Situation: A Single Page App (SPA) memory footprint grew from 80 MB on load to 1.2 GB after 30 minutes of use. The app was an admin tool — users kept it open for hours. Browser tabs crashed. Support tickets mentioned "page becomes unresponsive."
Task: Identify leak sources and reduce memory to a stable plateau.
Action:
- Heap snapshot comparison: Loaded the app, took snapshot A. Used the app for 5 minutes (navigated 10 routes, opened/closed modals), took snapshot B. The Comparison view showed `Detached HTMLDivElement` (count +340), retained by event listener closures. Also: `Array` objects growing (+12,000), retained by a global cache.
- Leak 1 — detached DOM: A modal component registered `addEventListener` on the document for the Escape key. The listener closed over the modal ref. When the modal unmounted, the DOM node was detached, but the document listener kept the closure alive, which kept the ref, which kept the node. Fix: `removeEventListener` in the `useEffect` cleanup, or an `AbortController` signal to manage listener lifecycle.
- Leak 2 — global cache: A `userPreferencesCache` object stored every fetched preference by userId. Keys were never removed when users logged out. Fix: used `WeakMap` keyed by the user object so entries were GC'd when the user logged out. For numeric IDs, implemented an LRU with max 100 entries and TTL eviction.
- Leak 3 — interval/timer: A `setInterval` in a component was not cleared on unmount, and its callback closed over component state. Fix: `useEffect` returning `() => clearInterval(id)`. Added an ESLint rule to flag `setInterval` without a matching `clearInterval` in the same scope.
- Monitoring: Added `performance.memory` (Chrome) sampling in development to track heap over time. In production, sampled a small percentage of sessions and reported diagnostics when memory exceeded a threshold (Sentry integration).
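The listener-lifecycle fix can be sketched with the standard `AbortSignal` option on `addEventListener` (EventTarget example; works in Node 16+ and modern browsers):

```javascript
const target = new EventTarget();    // stands in for document
const ctrl = new AbortController();
let calls = 0;

// Registering with a signal ties the listener's lifetime to the controller.
target.addEventListener('ping', () => { calls += 1; }, { signal: ctrl.signal });

target.dispatchEvent(new Event('ping')); // handled
ctrl.abort();                            // detaches the listener and frees its closure
target.dispatchEvent(new Event('ping')); // ignored - nothing retained
```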
Result: Memory stabilized at 95 MB after 1 hour. Detached node count stayed at 0. No more tab crashes. The ESLint rule prevented 3 new leaks in the next quarter.
What separates good from great: Use heap snapshots and comparison. Identify detached DOM + listener pattern. Show WeakMap for object-keyed cache. Emphasize cleanup in useEffect for subscriptions, intervals, listeners.
7) How do you apply design patterns (Module, Observer, Factory, Strategy) in JavaScript?
What interviewer evaluates: pattern recognition and practical application.
Situation: An e-commerce platform had 6 different payment processors (Stripe, PayPal, Adyen, etc.). Each integration was 400+ lines with duplicated validation, logging, and retry logic. Adding a new processor meant copying an existing file and replacing API calls — error-prone and slow.
Task: Refactor to a pluggable architecture where new processors could be added with minimal code.
Action:
- Strategy pattern: Defined a `PaymentStrategy` interface: `{charge(amount, currency, metadata), refund(id), getStatus(id)}`. Each processor implemented this interface. The payment service held a `Map` of strategy name to implementation; `pay(processor, ...)` delegated to the selected strategy.
- Factory: `PaymentStrategyFactory.create(name)` returned the correct strategy instance. Config drove which strategies were enabled. New processor: implement the interface, register in the factory, add config — no changes to the core payment flow.
- Module pattern for shared logic: Created a `paymentUtils` module (IIFE-style historically; in modern ESM, just exported functions) with `validateAmount`, `logPaymentEvent`, `retryWithBackoff`. Each strategy called these; no duplication. The module had no mutable shared state — pure functions.
- Observer for events: Payment completion needed to trigger (1) email receipt, (2) inventory update, (3) analytics. Used an EventEmitter: `paymentEmitter.emit('charged', payload)`. Listeners were registered per feature, decoupling payment logic from side effects; new listeners could be added without touching payment code.
- Resulting structure: `strategies/stripe.js` (120 lines), `strategies/paypal.js` (110 lines). New Adyen integration: 95 lines, all strategy-specific. Core payment flow: 80 lines. Total lines reduced 40% while adding flexibility.
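A minimal Strategy-plus-Factory sketch (names echo the scenario; the strategy bodies are stand-ins, not real processor integrations):

```javascript
const strategies = new Map();

function registerStrategy(name, impl) {
  strategies.set(name, impl);
}

// Factory: resolve a strategy by its config-driven name.
function createStrategy(name) {
  const impl = strategies.get(name);
  if (!impl) throw new Error(`Unknown processor: ${name}`);
  return impl;
}

registerStrategy('stripe', { charge: (amount) => ({ ok: true, via: 'stripe', amount }) });
registerStrategy('paypal', { charge: (amount) => ({ ok: true, via: 'paypal', amount }) });

// Core flow: delegates to the selected strategy, never changes per processor.
function pay(processor, amount) {
  return createStrategy(processor).charge(amount);
}
```

Adding a processor means registering one more strategy; `pay` and its callers are untouched.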
Result: Added 2 new payment processors in 1 day each (was 1 week). Zero regressions. The Observer pattern made it trivial to add webhook handlers for each processor without modifying core code.
What separates good from great: Show Strategy for interchangeable behaviour, Factory for creation, Module for encapsulation, Observer for decoupled events. Use real examples, not textbook UML.
8) How do you integrate TypeScript (migration, generics, strict mode) into a JavaScript codebase?
What interviewer evaluates: TypeScript adoption strategy, strict mode, generics.
Situation: A 200k-line JavaScript app had no types. Refactoring took 3x longer because there was no way to know what shape data had. A senior dev wanted to introduce TypeScript; the team was skeptical about a big-bang rewrite.
Task: Adopt TypeScript gradually with maximum impact and minimum disruption.
Action:
- Incremental enablement: Added `tsconfig.json` with `allowJs: true`, `checkJs: false`. Wrote new files in .ts and migrated one high-impact module (auth) to .ts. Existing .js files were not type-checked. Over 6 months, migrated 30% of files. No "convert everything at once" — each PR could migrate one file.
- Strict mode in stages: Started with `strict: false`. Enabled checking for new files only via `// @ts-check` in specific directories. Once a directory was fully migrated, turned on `strictNullChecks` for that dir. Full strict mode after 8 months. Documented the migration order: `noImplicitAny` → null checks → the rest.
- Generics for utilities: Built a `useFetch<T>` hook with a generic type parameter: `useFetch<T>(fetcher)` returned `data: T | null`, so callers got full inference. For API clients: `api.get<T>(path): Promise<T>` — the generic inferred the response shape.
- Type definitions for legacy: Created `.d.ts` files for untyped npm packages. For internal JS modules, added JSDoc `@param` and `@returns` with types — TypeScript used these for checking without converting to .ts. This provided value before a file was migrated.
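The JSDoc approach lets TypeScript type-check a `.js` file without renaming it (with `checkJs` or `// @ts-check` enabled). A hypothetical example; `findFirst` is illustrative:

```javascript
/**
 * Return the first item matching the predicate, or undefined.
 * TypeScript reads these annotations and infers T at each call site.
 * @template T
 * @param {T[]} items
 * @param {(item: T) => boolean} predicate
 * @returns {T | undefined}
 */
function findFirst(items, predicate) {
  for (const item of items) {
    if (predicate(item)) return item;
  }
  return undefined;
}
```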
Result: Refactoring time dropped 50% in migrated areas. The strict migration found 200+ real bugs (null access, wrong arg types). New features were 100% TypeScript. Zero downtime or blocked deployments.
What separates good from great: Show incremental migration, allowJs, and strict-in-stages. Explain generics for reusable utilities. Use JSDoc for gradual typing of legacy. Avoid big-bang rewrites.
9) How do you handle CommonJS vs ESM, tree-shaking, and dynamic imports?
What interviewer evaluates: module system knowledge, build optimization.
Situation: A Node.js backend used a mix of require() and import. Some packages were ESM-only; others were CJS-only. The frontend bundle was 2.1 MB because entire libraries were included — tree-shaking was not working. Initial load took 8 seconds on 3G.
Task: Unify module usage and reduce bundle size by 50%.
Action:
- Node.js module strategy: Set `"type": "module"` in package.json and converted all files to ESM. For CJS-only packages, used `createRequire` from `node:module` to import them. ESM-only packages worked natively. Documented: new code is ESM only.
- Tree-shaking audit: Analyzed the bundle with `webpack-bundle-analyzer`. Found `lodash` (the entire ~70 KB) pulled in for 3 functions. Replaced with `lodash-es` and named imports (e.g. `import { debounce, throttle } from 'lodash-es'`). Verified in the analyzer that only the imported functions remained in the bundle. Added a lint restriction to prevent `import _ from 'lodash'`.
- Dynamic imports: The admin dashboard had 15 heavy components (rich text editor, chart libs). Changed to `const Editor = lazy(() => import('./Editor'))`. Each route loaded its chunk on demand. The initial bundle dropped from 2.1 MB to 420 KB; the rest split into 15 chunks loaded on navigation.
- package.json exports: For internal packages in a monorepo, used `"exports"` with conditional entries: `{"import": "./dist/esm/index.js", "require": "./dist/cjs/index.js"}`. Consumers got the appropriate format; ESM consumers received tree-shakeable code.
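A conditional `exports` map of the kind described (package name and paths are illustrative):

```json
{
  "name": "@acme/ui",
  "type": "module",
  "exports": {
    ".": {
      "import": "./dist/esm/index.js",
      "require": "./dist/cjs/index.js"
    }
  },
  "sideEffects": false
}
```

`"sideEffects": false` tells bundlers that unused exports can be dropped safely, which is what makes tree-shaking effective.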
Result: Backend: single module system, no interop bugs. Frontend: bundle 980 KB (53% reduction). Time to Interactive on 3G: 3.2 seconds (from 8). Lighthouse performance score +22 points.
What separates good from great: Explain CJS vs ESM (require vs import; CJS loads synchronously at runtime, ESM is statically analyzable). Show tree-shaking requirements (ESM, named imports, `"sideEffects": false`). Use dynamic import for code splitting. Mention package.json exports for library authors.
10) How do you prevent XSS, CSRF, and prototype pollution in JavaScript applications?
What interviewer evaluates: security awareness, mitigation strategies.
Situation: A user-generated content platform allowed rich text and file names in the UI. A penetration test found: (1) stored XSS via unescaped user input in the DOM, (2) missing CSRF protection on state-changing API calls, (3) prototype pollution in a utility that merged user-provided config objects.
Task: Remediate all three vulnerability classes.
Action:
- XSS mitigation: (1) Never used `innerHTML` with user data. For rich text, used DOMPurify to sanitize HTML before render, with an allowlist of tags (no script, no event-handler attributes like onerror). (2) Set a Content-Security-Policy header: `script-src 'self'`; no inline scripts. (3) For dynamic attributes, used `setAttribute` with allowlisted names. (4) React's default escaping helped — but `dangerouslySetInnerHTML` bypasses it; we banned it for user content.
- CSRF mitigation: (1) Set `SameSite=Strict` on cookies — cookies not sent on cross-site requests. (2) For APIs that could not use SameSite (legacy), added CSRF tokens: the server generated a token in a cookie, the client echoed it in an `X-CSRF-Token` header, and the server validated it on state-changing requests. (3) Verified `Origin` and `Referer` headers as an additional check.
- Prototype pollution mitigation: The merge utility used `Object.assign` and recursive merge. A malicious payload `{"__proto__": {"isAdmin": true}}` could pollute `Object.prototype`. Fix: (1) used a merge that skipped the keys `__proto__`, `constructor`, and `prototype`; (2) for user config, created the target with `Object.create(null)` so it had no prototype; (3) validated all keys against an allowlist before merging.
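A pollution-safe recursive merge along the lines described (the `safeMerge` name is illustrative):

```javascript
// Keys that can reach Object.prototype are skipped outright.
const BLOCKED = new Set(['__proto__', 'constructor', 'prototype']);

function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (BLOCKED.has(key)) continue;                 // drop dangerous keys
    const value = source[key];
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      const nested =
        target[key] && typeof target[key] === 'object' ? target[key] : {};
      target[key] = safeMerge(nested, value);       // recurse into plain objects
    } else {
      target[key] = value;
    }
  }
  return target;
}
```

Note that `JSON.parse` creates `__proto__` as an ordinary own property, so it shows up in `Object.keys` and gets filtered here; a naive recursive merge would instead walk into it and write onto `Object.prototype`.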
Result: Pen test passed. No XSS, CSRF, or prototype pollution findings. The CSP blocked 2 attempted XSS payloads in production (logged by report-uri). Config merge was hardened — fuzz testing found no new pollution vectors.
What separates good from great: Show output encoding and CSP for XSS, SameSite and tokens for CSRF, key validation for prototype pollution. Mention DOMPurify, avoid eval and innerHTML with user data. Explain the attack, then the fix.
Rapid-fire interview practice
60-second verbal answers. Practice out loud.
Round 1: Core Language (hoisting, this, WeakMap, Symbol)
Q: How does hoisting work for var, let, and const?
Situation: A developer got ReferenceError when accessing a variable before its declaration. Another got undefined in a similar case.
Task: Explain the difference.
Action: var is hoisted to function scope; it exists from the start of the function but is undefined until assignment — no TDZ (Temporal Dead Zone). let and const are hoisted to block scope but live in TDZ until the declaration line — access before that throws ReferenceError. This prevents accidental use before initialization.
Result: Team adopted let/const everywhere; var was banned. TDZ errors caught 3 real bugs where variables were used before async init completed.
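The difference can be shown in one function (variable names are illustrative):

```javascript
// TDZ demo: var reads as undefined before its line; let throws ReferenceError.
function demo() {
  const varIsUndefined = typeof hoisted === 'undefined'; // var: hoisted, value undefined
  let tdzThrew = false;
  try {
    blocked;                        // let: still in the temporal dead zone here
  } catch (e) {
    tdzThrew = e instanceof ReferenceError;
  }
  var hoisted = 1;
  let blocked = 2;
  return { varIsUndefined, tdzThrew };
}
```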
Q: How do you explain the value of this in different contexts?
Situation: An event handler lost its this context when passed as a callback.
Task: Explain this binding rules.
Action: this is determined by call site: (1) default binding (global in sloppy mode, undefined in strict), (2) implicit (object method call), (3) explicit (call, apply, bind), (4) new (bound to the new object). Arrow functions inherit this from lexical scope — they don't have their own. For event handlers, used arrow functions or .bind(this) in class components. In modern React, prefer function components — no this.
Result: Documented the 4 rules on the wiki. New hires stopped making "this is undefined" mistakes.
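The four rules in miniature (`Greeter` is an illustrative class):

```javascript
const user = {
  name: 'Ada',
  getName() { return this.name; },      // implicit: this = object before the dot
};

const bound = user.getName.bind(user);  // explicit: this fixed regardless of caller

class Greeter {
  constructor(name) { this.name = name; }  // new: this = the fresh instance
  greet = () => `hi ${this.name}`;         // arrow: this captured lexically
}
```

Detaching a regular method loses its `this`; the bound and arrow versions survive being passed around as callbacks.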
Q: When would you use WeakMap instead of Map?
Situation: A cache keyed by DOM elements was causing memory leaks — elements could not be GC'd because the Map held strong references.
Task: Allow DOM nodes to be collected when no longer used.
Action: WeakMap keys are weakly held — when the key is garbage-collected, the entry is removed. Used WeakMap for: DOM → metadata, object → private data. Cannot iterate WeakMap; no size property. Keys must be objects (not primitives). Perfect for associating data with object lifecycle without extending the object.
Result: Switched to WeakMap; detached DOM count dropped to zero. Same pattern for component-instance → cache.
Q: What are Symbols and when do you use them?
Situation: Object property names from different modules collided; one overwrote the other.
Task: Create unique property keys that don't conflict.
Action: Symbol() creates a unique value — no two symbols are equal. Used as object keys for metadata, private-like properties (Symbol.for shared global, Symbol() unique per usage). Well-known symbols (Symbol.iterator, Symbol.toStringTag) customize built-in behaviour. Iteration protocols use Symbol.iterator.
Result: Used Symbol for internal React Fiber keys, dependency injection tokens, and plugin registration. Zero collision bugs.
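The uniqueness and registry behavior in a few lines:

```javascript
// Every Symbol() is unique, even with the same description.
const a = Symbol('token');
const b = Symbol('token');

// Symbol.for() shares a value via the global symbol registry.
// Symbol-keyed properties don't appear in Object.keys or JSON.stringify,
// which makes them safe for collision-free metadata.
const key = Symbol('meta');
const obj = { [key]: 'hidden', visible: 1 };
```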
Round 2: Async Patterns (generators, AbortController, Promise.allSettled)
Q: When would you use a generator function?
Situation: Needed to consume a large dataset in chunks without loading it all into memory.
Task: Stream processing with backpressure.
Action: Generator functions (function*) yield values one at a time. Caller controls when to pull next. Used for: paginated API consumption, parsing large files line-by-line, infinite sequences. Also power async generators (async function*) for async iteration with for await. Redux-Saga uses generators for testable async flow.
Result: Log processing pipeline handled 10 GB files with constant memory. Generators made the control flow explicit.
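A minimal pull-based chunking generator (`chunks` is a hypothetical name; file streaming would follow the same shape with an async generator):

```javascript
// Nothing is computed until the caller asks for the next chunk,
// so memory stays bounded by the chunk size.
function* chunks(items, size) {
  for (let i = 0; i < items.length; i += size) {
    yield items.slice(i, i + size);
  }
}
```

Consume with `for...of` to process one chunk at a time, or spread into an array when the full result is small.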
Q: How do you cancel fetch requests with AbortController?
Situation: User changed search query; old request completed after new one, overwriting results.
Task: Cancel in-flight requests when they become irrelevant.
Action: const ctrl = new AbortController(); pass ctrl.signal to fetch(url, {signal: ctrl.signal}). Call ctrl.abort() when the new request starts. fetch throws AbortError when aborted — catch and ignore. One controller per logical request; abort previous when starting new.
Result: No more stale response bugs. Same pattern for subscription cleanup, event listeners (addEventListener with AbortSignal).
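A latest-request-wins sketch (`search` and `doFetch` are hypothetical; `doFetch` stands in for a `fetch` call that honors the signal):

```javascript
let currentCtrl = null;

async function search(query, doFetch) {
  if (currentCtrl) currentCtrl.abort();     // cancel the in-flight request
  const ctrl = new AbortController();
  currentCtrl = ctrl;
  try {
    return await doFetch(query, ctrl.signal);
  } catch (err) {
    if (err.name === 'AbortError') return null; // stale request - ignore it
    throw err;                                   // real failures still surface
  }
}
```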
Q: When do you use Promise.allSettled vs Promise.all?
Situation: Dashboard fetched 10 endpoints; one failure caused entire load to fail.
Task: Show partial data when some requests succeed.
Action: Promise.all rejects on first failure; you get all or nothing. Promise.allSettled always resolves with an array of {status, value?, reason?}. Process each: fulfilled → use value, rejected → show error for that item. Use allSettled when partial success is acceptable; use all when all must succeed.
Result: Dashboard resilient to single API failure. Users saw 9/10 sections; one showed retry. all for critical flows (e.g. checkout steps).
Round 3: Tooling (bundlers, linting, monorepos)
Q: How do you choose between Webpack, Vite, and esbuild?
Situation: Webpack dev server took 45 seconds to start. Team wanted faster iteration.
Task: Improve dev experience without breaking production build.
Action: Webpack: mature, full-featured, slower. Vite: uses esbuild for deps (pre-bundled), native ESM for app — instant HMR. esbuild: fastest, but less mature for complex config. Chose Vite: 45s → 2s cold start, HMR under 50ms. Production build still used Rollup (Vite's prod bundler) for tree-shaking. For library-only: esbuild for speed.
Result: Developer satisfaction up. CI build time similar. Vite config was 80% smaller than Webpack.
Q: How do you enforce code quality with ESLint?
Situation: PRs introduced bugs that static analysis could have caught. No consistent rules.
Task: Automate quality gates.
Action: ESLint with recommended + TypeScript plugin. Custom rules: no-console in prod, react-hooks/exhaustive-deps, @typescript-eslint/no-explicit-any (error for new code). Pre-commit (Husky + lint-staged) ran ESLint on staged files. CI failed the PR if lint failed. Gradually fixed legacy; new code had zero lint errors. Used --fix for auto-fixable rules.
Result: 80% of PRs passed lint first try. Caught 50+ bugs before merge in 6 months.
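A legacy-format `.eslintrc.json` excerpt matching the rules named above (exact plugin setup and severities vary by project):

```json
{
  "extends": [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended"
  ],
  "rules": {
    "no-console": "error",
    "react-hooks/exhaustive-deps": "warn",
    "@typescript-eslint/no-explicit-any": "error"
  }
}
```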
Q: How do you structure a JavaScript monorepo?
Situation: 5 apps shared utilities; copying code led to drift. No atomic cross-package changes.
Task: Single repo with shared packages and consistent tooling.
Action: Used pnpm workspaces (or npm/yarn workspaces). Structure: packages/ui, packages/utils, apps/web, apps/admin. Each package had its own package.json, build step. apps/* depended on packages/* via workspace protocol (workspace:*). Turborepo or Nx for task orchestration: ran build in dependency order, cached outputs. Single lockfile at root.
Result: Shared code had single source of truth. One PR could change util + both apps. Build cache cut CI time 40%.
From real experience
"I've written JavaScript professionally for 8+ years — from jQuery spaghetti to TypeScript monorepos. The biggest gap I see in JavaScript interviews: candidates can recite closure definitions but can't explain why their production app leaked 500 MB. They know the event loop exists but can't debug why a setTimeout ran before a Promise. That bridge between theory and production debugging is what separates mid-level from senior."
"Master the fundamentals first: scope, closures, this, prototype chain, event loop. Then layer on: async patterns (allSettled, AbortController, generators), memory (heap snapshots, WeakMap), modules (ESM vs CJS, tree-shaking), and security (XSS, CSRF, prototype pollution). TypeScript is table stakes at 4+ years — know strict mode, generics, and migration strategy."
— Surya Singh, Full Stack Engineer with 8+ years across JavaScript, TypeScript, React, and Node.js
Common interview mistakes to avoid
- Saying "I know closures" without explaining the scope chain and when closures cause memory leaks.
- Not knowing microtask vs macrotask order — if you can't predict when code runs, expect pushback on async questions.
- Treating ES6 classes as "different" from prototypes — interviewers want to know it's syntactic sugar.
- Using `Promise.all` when partial success is acceptable; not mentioning `Promise.allSettled`.
- Skipping `null`/`undefined` handling where optional chaining is needed — then getting runtime errors. Know when to use `??` vs `||`.
- Not mentioning heap snapshots or cleanup patterns when asked about memory — "just use WeakMap" without diagnosis is incomplete.
- Reciting design pattern names without a concrete example from your work.
- Claiming you know TypeScript but never having used strict mode or generics in production.
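The `Promise.all` vs `Promise.allSettled` point from the list above is easy to demonstrate; a minimal sketch:

```javascript
const tasks = [
  Promise.resolve('users loaded'),
  Promise.reject(new Error('profile service down')),
  Promise.resolve('settings loaded'),
];

// Promise.all rejects as soon as any input rejects; the two successes are lost.
Promise.all(tasks).catch((err) => {
  console.log('all() failed:', err.message); // all() failed: profile service down
});

// Promise.allSettled always fulfills, reporting each outcome individually,
// which is the right tool when partial success is acceptable.
Promise.allSettled(tasks).then((results) => {
  const ok = results.filter((r) => r.status === 'fulfilled').map((r) => r.value);
  const failed = results.filter((r) => r.status === 'rejected').map((r) => r.reason.message);
  console.log(ok);     // ['users loaded', 'settings loaded']
  console.log(failed); // ['profile service down']
});
```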
Frequently asked questions
What JavaScript topics are tested at the 4-year level?
Closures and scope chain, event loop (microtasks vs macrotasks), prototypal inheritance vs ES6 classes, async/await and error handling, ES6+ features in production, memory management and leak detection, design patterns (module, observer, factory, strategy), TypeScript integration and strict mode, CommonJS vs ESM and tree-shaking, and security (XSS, CSRF, prototype pollution).
How should I explain closures in an interview?
Focus on the scope chain: a closure is a function that retains access to variables from its lexical environment even after the outer function returns. Explain use cases: data privacy (module pattern), partial application, event handlers, and common pitfalls like closures in loops capturing the loop variable. Mention that closures can cause memory leaks if they hold references to large objects.
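The loop-variable pitfall and the data-privacy use case mentioned above can be sketched in a few lines:

```javascript
// Classic pitfall: `var` gives all three callbacks the same binding of `i`.
const leaked = [];
for (var i = 0; i < 3; i++) {
  leaked.push(() => i);
}
console.log(leaked.map((fn) => fn())); // [3, 3, 3]

// Fix: `let` creates a fresh binding per iteration, so each closure captures its own value.
const scoped = [];
for (let j = 0; j < 3; j++) {
  scoped.push(() => j);
}
console.log(scoped.map((fn) => fn())); // [0, 1, 2]

// Data privacy: `count` is reachable only through the returned functions.
function makeCounter() {
  let count = 0;
  return { increment: () => ++count, current: () => count };
}
const counter = makeCounter();
counter.increment();
counter.increment();
console.log(counter.current()); // 2
```

The same retention mechanism behind `makeCounter` is what causes leaks: a long-lived closure (say, an event handler) keeps everything it captures alive, including large arrays it no longer needs.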
What is the difference between microtasks and macrotasks?
Macrotasks (setTimeout, setInterval, I/O, UI rendering) are queued in the task queue and processed one per event loop iteration. Microtasks (Promises, queueMicrotask, MutationObserver) run after the current synchronous code and before the next macrotask, and all microtasks in the queue run before the event loop continues. A microtask can schedule more microtasks which run before yielding.
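The ordering described above is worth being able to predict cold; a minimal sketch:

```javascript
console.log('1: sync');

// Macrotask: queued for a later event loop turn, even with a 0 ms delay.
setTimeout(() => console.log('4: macrotask (setTimeout)'), 0);

// Microtask: runs after the current synchronous code, before any macrotask.
Promise.resolve().then(() => console.log('3: microtask (promise)'));

console.log('2: sync');
// Output order: 1, 2, 3, 4. The microtask queue drains completely
// before the event loop picks up the next macrotask.
```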
How important is TypeScript for JavaScript interviews?
Increasingly critical at 4+ years. Interviewers expect you to explain strict mode benefits, typed interfaces for API responses, generics for reusable utilities, migration strategies from JavaScript, and the trade-offs of gradual vs strict adoption. You should be able to explain how TypeScript catches bugs at compile time and improves IDE support.
What memory management topics should I know?
Garbage collection basics (mark-and-sweep), reference counting limitations, closure-induced leaks (holding references in event listeners or timers), detached DOM nodes, WeakMap and WeakSet for avoiding memory retention, heap snapshot analysis in Chrome DevTools, and production strategies like sampling or error-boundary heap dumps.
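The WeakMap point above is easiest to show with per-object metadata; a minimal sketch (the object here is a stand-in for a DOM node or similar):

```javascript
// A Map keeps its keys alive: entries survive as long as the Map itself does.
// A WeakMap holds keys weakly: once no other reference to the key object
// remains, the entry becomes eligible for garbage collection.
const metadata = new WeakMap();

function tag(obj, info) {
  metadata.set(obj, info);
}

let node = { id: 'chart-1' }; // stand-in for a DOM node
tag(node, { subscribedAt: Date.now() });
console.log(metadata.has(node)); // true

// Dropping the last strong reference makes both the object and its
// WeakMap entry collectable; a regular Map entry would leak instead.
node = null;
```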
How do CommonJS and ESM differ?
CommonJS uses require() (synchronous, runtime) and module.exports. ESM uses import/export (static, hoisted, analyzed at parse time). ESM supports tree-shaking, top-level await, and native browser support. Node.js can use both with different file extensions (.cjs vs .mjs) and package.json type field. Interop requires care with default vs named exports.
What JavaScript security vulnerabilities should I discuss?
XSS (injection of scripts via unsanitized user input — use CSP, encode output, avoid eval). CSRF (cross-site requests — use SameSite cookies, CSRF tokens). Prototype pollution (mutating Object.prototype — validate object keys, use Object.create(null)). Also mention Content-Security-Policy, Subresource Integrity, and avoiding dangerous APIs like innerHTML with user data.
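The prototype pollution case above is concrete enough to demo: a naive recursive merge of untrusted JSON reaches `Object.prototype` through a `"__proto__"` key, and the guard is to reject dangerous keys. A minimal sketch:

```javascript
// VULNERABLE: recursing into target["__proto__"] lands on Object.prototype.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (source[key] !== null && typeof source[key] === 'object') {
      if (typeof target[key] !== 'object' || target[key] === null) target[key] = {};
      naiveMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

const attacker = JSON.parse('{"__proto__": {"isAdmin": true}}');
naiveMerge({}, attacker);
console.log({}.isAdmin); // true: every object now inherits isAdmin
delete Object.prototype.isAdmin; // undo the damage for the rest of the demo

// GUARD: skip dangerous keys (or use Object.create(null) for merge targets).
const BLOCKED = new Set(['__proto__', 'constructor', 'prototype']);
function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (BLOCKED.has(key)) continue;
    if (source[key] !== null && typeof source[key] === 'object') {
      if (typeof target[key] !== 'object' || target[key] === null) target[key] = {};
      safeMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

safeMerge({}, attacker);
console.log({}.isAdmin); // undefined
```

Note that JSON.parse creates `"__proto__"` as an own data property, which is exactly why parsed request bodies are the typical attack vector.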
What design patterns are commonly asked in JavaScript interviews?
Module pattern (IIFE with private scope), Observer/Publisher-Subscriber (EventEmitter, custom event bus), Factory (functions returning configured objects), Strategy (interchangeable algorithms passed as functions), Singleton (module exports), and Decorator (wrapping functions). Focus on when to use each and implementation in modern JavaScript.
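The Observer/Pub-Sub pattern from the list above is the one most often asked to be written live; a minimal event bus sketch:

```javascript
// Minimal observer / pub-sub event bus with unsubscribe support.
function createEventBus() {
  const listeners = new Map(); // event name -> Set of handlers
  return {
    on(event, handler) {
      if (!listeners.has(event)) listeners.set(event, new Set());
      listeners.get(event).add(handler);
      // Returning an unsubscribe function makes cleanup explicit
      // and helps avoid the listener leaks discussed earlier.
      return () => listeners.get(event).delete(handler);
    },
    emit(event, payload) {
      (listeners.get(event) ?? new Set()).forEach((handler) => handler(payload));
    },
  };
}

const bus = createEventBus();
const seen = [];
const off = bus.on('metric', (value) => seen.push(value));
bus.emit('metric', 42);
off(); // unsubscribed: later emits are ignored
bus.emit('metric', 99);
console.log(seen); // [42]
```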
Related interview guides
- MERN Stack Interview Questions — STAR Format Guide
- MEAN Stack Interview Questions — STAR Format Guide
- SQL Interview Questions
- AI/ML Engineer Interview Questions — STAR Format Guide
Surya Singh
Azure Solutions Architect & AI Engineer
Microsoft-certified Azure Solutions Architect with 8+ years in enterprise software, cloud architecture, and AI/ML deployment. I build production AI systems and write about what actually works—based on shipping code, not theory.
- Microsoft Certified: Azure Solutions Architect Expert
- Built 20+ production AI/ML pipelines on Azure
- 8+ years in .NET, C#, and cloud-native architecture