JavaScript Essentials

The Event Loop & Asynchronous JavaScript

Lesson 47 of 60 -- 50 min

JavaScript is Single-Threaded

JavaScript is fundamentally a single-threaded programming language. This means it has only one call stack and can execute only one piece of code at a time. Unlike languages such as Java or C++ that support true multithreading, JavaScript processes instructions sequentially on a single thread. This design was an intentional choice when Brendan Eich created JavaScript in 1995 -- a single-threaded model avoids the complexities of concurrent access to the DOM, race conditions, and deadlocks that plague multithreaded environments. But if JavaScript can only do one thing at a time, how does it handle network requests, timers, and user interactions without freezing the browser? The answer lies in the event loop and the asynchronous runtime environment that surrounds the JavaScript engine.

The JavaScript Runtime Architecture

To understand asynchronous JavaScript, you must understand the components of the JavaScript runtime environment. The runtime consists of several cooperating pieces that work together to create the illusion of concurrency within a single-threaded model.

  • Call Stack -- The call stack is a LIFO (Last In, First Out) data structure that tracks which function is currently executing. When a function is called, a new frame is pushed onto the stack. When the function returns, its frame is popped off. JavaScript can only execute the function at the top of the stack.
  • Heap -- The heap is an unstructured region of memory where objects are allocated. When you create objects, arrays, or functions, they are stored in the heap. The call stack holds references (pointers) to these heap-allocated objects.
  • Web APIs (Browser) / C++ APIs (Node.js) -- These are APIs provided by the host environment, not by the JavaScript engine itself. They include setTimeout, setInterval, fetch, DOM event listeners, XMLHttpRequest, geolocation, and many more. When you call these APIs, the host environment handles the work on separate threads outside the JavaScript engine.
  • Callback Queue (Task Queue / Macrotask Queue) -- When a Web API completes its work (e.g., a timer expires or an HTTP response arrives), it places the associated callback function into the callback queue. This is also called the task queue or macrotask queue.
  • Microtask Queue -- A separate, higher-priority queue specifically for microtasks. Promise callbacks (.then(), .catch(), .finally()), queueMicrotask(), and MutationObserver callbacks all go into this queue.
  • Event Loop -- The event loop is the orchestrator. It continuously checks whether the call stack is empty. If it is, the event loop first drains all microtasks from the microtask queue, then picks the next macrotask from the callback queue and pushes it onto the call stack for execution.
Note: The JavaScript engine (like V8 in Chrome and Node.js, or SpiderMonkey in Firefox) only provides the call stack and the heap. Everything else -- the Web APIs, the queues, and the event loop -- is provided by the host environment (the browser or Node.js).

The Call Stack in Detail

The call stack is where JavaScript keeps track of function execution. Every time you call a function, a new execution context is created and pushed onto the stack. When the function finishes, its context is popped off. Let us trace through an example step by step.

Example: Tracing the Call Stack

function multiply(a, b) {
    return a * b;
}

function square(n) {
    return multiply(n, n);
}

function printSquare(n) {
    const result = square(n);
    console.log(result);
}

printSquare(4);

Here is the step-by-step call stack trace:

  1. printSquare(4) is called -- pushed onto the stack.
  2. Inside printSquare, square(4) is called -- pushed onto the stack.
  3. Inside square, multiply(4, 4) is called -- pushed onto the stack.
  4. multiply returns 16 -- popped off the stack.
  5. square returns 16 -- popped off the stack.
  6. console.log(16) is called inside printSquare -- pushed and popped.
  7. printSquare finishes -- popped off the stack. The stack is now empty.
Warning: If the call stack grows too large (e.g., infinite recursion), you get a "Maximum call stack size exceeded" error, commonly known as a stack overflow. The browser or Node.js imposes a limit on how deep the call stack can grow to protect against runaway recursion consuming all available memory.
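
For instance, a recursive function with no base case exhausts the stack almost immediately. A minimal illustration (the exact limit and error message vary by engine):

// Each call pushes a new frame onto the stack; none is ever popped
function recurse() {
    recurse();
}

recurse();
// RangeError: Maximum call stack size exceeded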

How Web APIs Enable Asynchronous Behavior

When you call an asynchronous function like setTimeout, the JavaScript engine does not wait for the timer to expire. Instead, it delegates the timer to the Web API layer (provided by the browser) and immediately moves on to the next line of code. The Web API runs the timer on a separate thread. When the timer expires, the Web API places your callback function into the callback queue. The event loop then picks it up when the call stack is empty.

Example: setTimeout and the Web API

console.log('Start');

setTimeout(function() {
    console.log('Timer callback');
}, 2000);

console.log('End');

// Output:
// Start
// End
// Timer callback (after ~2 seconds)

Here is exactly what happens:

  1. console.log('Start') executes immediately on the call stack. Output: Start.
  2. setTimeout is called. The JavaScript engine hands the callback and the 2000ms delay to the Web API. setTimeout itself returns immediately and is popped off the stack.
  3. console.log('End') executes immediately. Output: End.
  4. The call stack is now empty. The event loop waits.
  5. After approximately 2000ms, the Web API moves the callback to the callback queue.
  6. The event loop sees the stack is empty, picks the callback from the queue, pushes it onto the stack, and it executes. Output: Timer callback.

The Callback Queue (Macrotask Queue)

The callback queue, also known as the task queue or macrotask queue, holds callbacks from Web APIs that are ready to execute. These include callbacks from setTimeout, setInterval, setImmediate (Node.js), I/O operations, and UI events such as clicks and key presses. The event loop processes one macrotask at a time. After processing each macrotask, the event loop checks the microtask queue and drains it completely before picking the next macrotask.

Example: Multiple Timers in the Callback Queue

setTimeout(() => console.log('Timer 1'), 0);
setTimeout(() => console.log('Timer 2'), 0);
setTimeout(() => console.log('Timer 3'), 0);

console.log('Synchronous');

// Output:
// Synchronous
// Timer 1
// Timer 2
// Timer 3

Even though all three timers have a delay of 0 milliseconds, they do not execute immediately. They are registered with the Web API, which places them into the callback queue. The synchronous code runs first, then the event loop processes each queued callback in order.
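
Also note that callbacks enter the queue when their timers expire, not in the order the timers were registered. A small sketch (timings are approximate and may vary by a few milliseconds):

setTimeout(() => console.log('100ms timer'), 100);
setTimeout(() => console.log('0ms timer'), 0);
setTimeout(() => console.log('50ms timer'), 50);

// Output:
// 0ms timer
// 50ms timer
// 100ms timer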

The Microtask Queue

The microtask queue is a separate queue with higher priority than the macrotask queue. Microtasks include Promise callbacks (.then(), .catch(), .finally()), queueMicrotask() callbacks, and MutationObserver callbacks. The critical difference is that the event loop drains the entire microtask queue before processing the next macrotask. This means if microtasks keep adding more microtasks, they will all be processed before any macrotask gets a chance to run.

Example: Microtasks vs Macrotasks

console.log('Script start');

setTimeout(() => {
    console.log('setTimeout');
}, 0);

Promise.resolve()
    .then(() => {
        console.log('Promise 1');
    })
    .then(() => {
        console.log('Promise 2');
    });

queueMicrotask(() => {
    console.log('queueMicrotask');
});

console.log('Script end');

// Output:
// Script start
// Script end
// Promise 1
// queueMicrotask
// Promise 2
// setTimeout

Let us trace the execution step by step:

  1. console.log('Script start') -- executes synchronously. Output: Script start.
  2. setTimeout -- callback is sent to Web API, then placed in the macrotask queue.
  3. Promise.resolve().then(...) -- the first .then() callback is placed in the microtask queue.
  4. queueMicrotask(...) -- callback is placed in the microtask queue.
  5. console.log('Script end') -- executes synchronously. Output: Script end.
  6. The call stack is empty. The event loop drains the microtask queue:
  7. First microtask: Promise 1 is logged. This .then() returns, so the second .then() callback is added to the microtask queue.
  8. Next microtask: queueMicrotask is logged.
  9. Next microtask: Promise 2 is logged (this was added in step 7).
  10. Microtask queue is empty. Now the event loop picks the next macrotask: setTimeout callback runs. Output: setTimeout.
Pro Tip: The execution order is always: synchronous code first, then all microtasks (draining the queue completely, including any newly added microtasks), then one macrotask, then all microtasks again, and so on. Remember this as: sync -> microtasks -> macrotask -> microtasks -> macrotask -> ...

The Event Loop Algorithm

The event loop follows a precise algorithm on each iteration (called a "tick"):

  1. Execute the oldest macrotask from the macrotask queue (or the initial script itself).
  2. After the macrotask completes and the call stack is empty, process all microtasks in the microtask queue. If a microtask adds new microtasks, process those too -- drain the queue completely.
  3. If there are rendering opportunities (the browser needs to repaint), execute requestAnimationFrame callbacks, recalculate styles, update layout, and paint.
  4. Go back to step 1 and pick the next macrotask.

This cycle repeats indefinitely as long as the page is open. When there is nothing to do, the event loop effectively sleeps, waiting for new tasks to arrive.
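
The loop can be sketched in JavaScript-like pseudocode. This is a simplified mental model only -- the queue objects and helper functions below (macrotaskQueue, microtaskQueue, browserWantsToRender, and so on) are hypothetical, and real browsers interleave task sources and rendering in more nuanced ways:

// Simplified, hypothetical model of one event loop iteration
while (pageIsOpen) {
    const task = macrotaskQueue.dequeue();   // Step 1: oldest macrotask
    if (task) run(task);

    while (!microtaskQueue.isEmpty()) {      // Step 2: drain ALL microtasks,
        run(microtaskQueue.dequeue());       // including newly added ones
    }

    if (browserWantsToRender()) {            // Step 3: rendering opportunity
        runAnimationFrameCallbacks();
        render();                            // style, layout, paint
    }
}                                            // Step 4: repeat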

setTimeout(fn, 0) -- Not Truly Zero Delay

A common misconception is that setTimeout(fn, 0) executes the callback immediately. It does not. The 0 millisecond delay means "add this to the macrotask queue as soon as possible," but the callback still has to wait for the current synchronous code to finish, all microtasks to be drained, and any preceding macrotasks to complete. In practice, browsers enforce a minimum delay of approximately 4 milliseconds for nested setTimeout calls (after the 5th nesting level), as defined in the HTML specification.

Example: setTimeout(0) Does Not Mean Immediate

const start = performance.now();

setTimeout(() => {
    const elapsed = performance.now() - start;
    console.log('setTimeout(0) ran after: ' + elapsed.toFixed(2) + 'ms');
}, 0);

// Simulate heavy synchronous work
let sum = 0;
for (let i = 0; i < 100000000; i++) {
    sum += i;
}

console.log('Heavy work done. Sum: ' + sum);

// Output:
// Heavy work done. Sum: 4999999950000000
// setTimeout(0) ran after: ~150ms (varies)

Even though the timer was set to 0ms, the callback waited over 150ms because the synchronous loop blocked the call stack. The event loop could not process the callback queue until the stack was empty.
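
You can also observe the ~4ms clamping for nested timers described above. A rough sketch (measured values are approximate and vary by browser):

// Re-schedule setTimeout(fn, 0) from within its own callback and measure
// the gaps. After about five levels of nesting, browsers clamp the delay
// to roughly 4ms even though 0 was requested.
let last = performance.now();
let depth = 0;

function tick() {
    const now = performance.now();
    console.log('Nesting level ' + depth + ': ' + (now - last).toFixed(2) + 'ms');
    last = now;

    if (++depth < 10) {
        setTimeout(tick, 0);
    }
}

setTimeout(tick, 0);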

Promise Resolution Timing

When a Promise resolves, its .then() callbacks are scheduled as microtasks. This means they execute before any macrotasks, even if those macrotasks were registered earlier. Understanding this ordering is essential for writing predictable asynchronous code and is a very common topic in JavaScript interviews.

Example: Promise vs setTimeout Ordering

setTimeout(() => console.log('1 - setTimeout'), 0);

new Promise((resolve) => {
    console.log('2 - Promise constructor');
    resolve();
}).then(() => {
    console.log('3 - Promise then');
});

console.log('4 - Synchronous');

// Output:
// 2 - Promise constructor
// 4 - Synchronous
// 3 - Promise then
// 1 - setTimeout
Note: The Promise constructor callback (the function passed to new Promise()) executes synchronously. Only the .then(), .catch(), and .finally() callbacks are scheduled as microtasks. This is a common source of confusion.

requestAnimationFrame Timing

The requestAnimationFrame (rAF) API schedules a callback to run before the next browser repaint, typically at 60 frames per second (every ~16.7ms). In the event loop model, rAF callbacks run after microtasks are drained and before the browser paints, but they are neither microtasks nor macrotasks -- they occupy a separate phase in the rendering pipeline. This makes rAF ideal for smooth visual animations.

Example: requestAnimationFrame in the Event Loop

console.log('Sync start');

requestAnimationFrame(() => {
    console.log('requestAnimationFrame');
});

setTimeout(() => {
    console.log('setTimeout');
}, 0);

Promise.resolve().then(() => {
    console.log('Promise microtask');
});

console.log('Sync end');

// Typical output:
// Sync start
// Sync end
// Promise microtask
// requestAnimationFrame  (may vary relative to setTimeout)
// setTimeout

The exact ordering of requestAnimationFrame relative to setTimeout(fn, 0) varies between browsers and depends on when the next rendering cycle occurs, but microtasks always run before both. Because rAF callbacks fire just before the next paint, their timing relative to macrotasks is implementation-dependent.

Microtask Starvation

Because the event loop drains the entire microtask queue before moving to the next macrotask, it is possible for microtasks to "starve" macrotasks. If a microtask continuously schedules new microtasks, the macrotask queue will never be processed, and the browser will appear frozen because rendering is also blocked.

Example: Microtask Starvation (Dangerous -- Do NOT Run in Production)

// WARNING: This will freeze the browser tab!
function recursiveMicrotask() {
    Promise.resolve().then(() => {
        console.log('microtask');
        recursiveMicrotask(); // schedules another microtask
    });
}

recursiveMicrotask();

// The setTimeout below will NEVER execute
setTimeout(() => {
    console.log('This will never print');
}, 0);
Warning: Never create infinite microtask loops in production code. Unlike runaway recursion, which at least fails fast with a stack overflow error, an infinite microtask loop silently freezes the browser tab with no error message; the user must force-close the tab. Always ensure that your microtask chains have a termination condition.
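
If you genuinely need work to repeat indefinitely, schedule each iteration as a macrotask instead, so timers, events, and rendering get a turn in between. A minimal sketch of the safer pattern:

// Each iteration is a separate macrotask, so the event loop can process
// other callbacks and repaint between iterations.
function recursiveMacrotask() {
    console.log('macrotask iteration');
    setTimeout(recursiveMacrotask, 0);
}

recursiveMacrotask();

// This timer now gets its turn in the queue
setTimeout(() => {
    console.log('This will print');
}, 0);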

Blocking the Event Loop

Long-running synchronous operations block the event loop entirely. While the call stack is occupied, no callbacks can be processed, no events can be handled, and the browser cannot repaint the screen. This is why computationally expensive synchronous code causes the page to become unresponsive -- the famous "frozen tab" experience.

Example: Blocking vs Non-Blocking

// BLOCKING: Freezes the UI for ~5 seconds
function blockingOperation() {
    const end = Date.now() + 5000;
    while (Date.now() < end) {
        // busy waiting -- blocks everything
    }
    console.log('Blocking done');
}

// NON-BLOCKING: Allows the event loop to breathe
function nonBlockingOperation(data, index = 0) {
    if (index >= data.length) {
        console.log('Non-blocking done');
        return;
    }

    // Process one chunk (processChunk is assumed to be defined elsewhere)
    processChunk(data[index]);

    // Yield to the event loop, then continue
    setTimeout(() => {
        nonBlockingOperation(data, index + 1);
    }, 0);
}

The non-blocking version processes data in chunks, using setTimeout(fn, 0) to yield control back to the event loop between chunks. This allows the browser to handle user events, repaint the screen, and process other callbacks. For heavy computation, consider using Web Workers, which run on a separate thread entirely.
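
A minimal Web Worker sketch might look like the following. The file name worker.js and the message format are placeholders, not part of the example above:

// main.js -- delegate heavy work to a separate thread
const worker = new Worker('worker.js'); // hypothetical worker file

worker.onmessage = (event) => {
    console.log('Result from worker:', event.data);
};

worker.postMessage([1, 2, 3, 4, 5]);

// worker.js -- runs off the main thread, so blocking here does not freeze the UI
self.onmessage = (event) => {
    const result = event.data.reduce((sum, n) => sum + n * n, 0); // placeholder work
    self.postMessage(result);
};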

Visualizing the Event Loop with a Complete Example

Let us walk through a comprehensive example that combines all the concepts. This type of exercise is extremely common in JavaScript technical interviews.

Example: Complete Event Loop Visualization

console.log('1');

setTimeout(() => {
    console.log('2');
    Promise.resolve().then(() => {
        console.log('3');
    });
}, 0);

Promise.resolve().then(() => {
    console.log('4');
    setTimeout(() => {
        console.log('5');
    }, 0);
});

setTimeout(() => {
    console.log('6');
}, 0);

Promise.resolve().then(() => {
    console.log('7');
});

console.log('8');

// Output: 1, 8, 4, 7, 2, 3, 6, 5

Let us trace every step carefully:

  1. Synchronous phase: console.log('1') runs. Output: 1.
  2. First setTimeout callback goes to the macrotask queue. Call it Macro-A.
  3. First Promise.resolve().then() callback goes to the microtask queue. Call it Micro-1.
  4. Second setTimeout callback goes to the macrotask queue. Call it Macro-B.
  5. Second Promise.resolve().then() callback goes to the microtask queue. Call it Micro-2.
  6. console.log('8') runs. Output: 8.
  7. Drain microtask queue:
  8. Micro-1 runs: logs 4, schedules a setTimeout (call it Macro-C) into the macrotask queue.
  9. Micro-2 runs: logs 7.
  10. Microtask queue is empty. Pick next macrotask.
  11. Macro-A runs: logs 2, schedules a Promise .then() (call it Micro-3) into the microtask queue.
  12. Drain microtask queue: Micro-3 runs: logs 3.
  13. Pick next macrotask: Macro-B runs: logs 6.
  14. Drain microtask queue (empty). Pick next macrotask: Macro-C runs: logs 5.

Interview-Style Execution Order Puzzles

JavaScript interviews frequently test your understanding of the event loop. Here are two challenging puzzles with detailed explanations.

Puzzle 1: async/await and the Event Loop

async function asyncFunc() {
    console.log('A');
    const result = await Promise.resolve('B');
    console.log(result);
    console.log('C');
}

console.log('D');
asyncFunc();
console.log('E');

// Output: D, A, E, B, C

Here is why:

  1. console.log('D') runs synchronously. Output: D.
  2. asyncFunc() is called. Inside it, console.log('A') runs synchronously. Output: A.
  3. await Promise.resolve('B') pauses the async function. Everything after the await is scheduled as a microtask. Control returns to the caller.
  4. console.log('E') runs synchronously. Output: E.
  5. Call stack is empty. The microtask from the await runs: result is 'B', so console.log(result) outputs B, then console.log('C') outputs C.
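
It may help to see that asyncFunc behaves roughly like an explicit Promise chain. This is an approximate desugaring for intuition, not exactly what engines generate:

function asyncFuncDesugared() {
    console.log('A');                          // still runs synchronously
    return Promise.resolve('B').then((result) => {
        // everything after the await becomes a .then() microtask
        console.log(result);
        console.log('C');
    });
}

console.log('D');
asyncFuncDesugared();
console.log('E');

// Output: D, A, E, B, C -- the same ordering as the async/await version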

Puzzle 2: Nested Promises and Timers

console.log('Start');

setTimeout(() => {
    console.log('Timeout 1');
    queueMicrotask(() => {
        console.log('Microtask inside Timeout 1');
    });
}, 0);

queueMicrotask(() => {
    console.log('Microtask 1');
    queueMicrotask(() => {
        console.log('Nested Microtask');
    });
});

Promise.resolve()
    .then(() => console.log('Promise 1'))
    .then(() => console.log('Promise 2'))
    .then(() => console.log('Promise 3'));

setTimeout(() => {
    console.log('Timeout 2');
}, 0);

console.log('End');

// Output:
// Start
// End
// Microtask 1
// Promise 1
// Nested Microtask
// Promise 2
// Promise 3
// Timeout 1
// Microtask inside Timeout 1
// Timeout 2

The key insight is that after each microtask, any newly added microtasks are also processed before moving to macrotasks. Microtask 1 runs and adds Nested Microtask to the queue. Promise 1 runs and adds Promise 2 to the queue. Then Nested Microtask and Promise 2 run, and Promise 2 adds Promise 3. All microtasks are fully drained before the first setTimeout callback runs.

Pro Tip: When solving event loop puzzles, use a three-column approach: write "Call Stack," "Microtask Queue," and "Macrotask Queue" as column headers. Step through the code line by line, noting what goes where. After each synchronous block or macrotask completes, drain all microtasks before picking the next macrotask. This systematic approach eliminates guesswork.

Practical Implications for Real-World Code

Understanding the event loop is not just academic. It directly impacts how you write production code. Here are key practical takeaways:

  • Never block the main thread with heavy synchronous computation. Use Web Workers for CPU-intensive tasks or break work into smaller chunks using setTimeout or requestIdleCallback.
  • Promise chains execute before timers. If you need something to happen after microtasks but before the next render, use requestAnimationFrame.
  • Use queueMicrotask() when you need to schedule something after the current synchronous code but before any macrotasks. This is useful for batching work -- for example, coalescing several DOM writes into one to avoid layout thrashing (see the sketch after this list).
  • Async/await does not make code synchronous. It provides syntactic sugar over Promises. The code after await is still a microtask that executes asynchronously.
  • Event handlers are macrotasks. User clicks, keyboard events, and network responses are all processed as macrotasks. This is why a click handler can feel sluggish if the previous macrotask takes too long.
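
As an illustration of the batching point above, here is a minimal sketch. The scheduleUpdate and flushUpdates functions are hypothetical -- in real code the flush step would perform your actual DOM writes:

const pendingUpdates = new Set();
let flushScheduled = false;

function scheduleUpdate(id) {
    pendingUpdates.add(id);
    if (!flushScheduled) {
        flushScheduled = true;
        // Flush once, after the current synchronous code but before any macrotask
        queueMicrotask(flushUpdates);
    }
}

function flushUpdates() {
    console.log('Flushing updates:', [...pendingUpdates]);
    pendingUpdates.clear();
    flushScheduled = false;
}

scheduleUpdate('header');
scheduleUpdate('sidebar');
scheduleUpdate('footer');

// Output (a single batched flush):
// Flushing updates: ['header', 'sidebar', 'footer']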

Example: Yielding to the Event Loop for Responsive UI

async function processLargeArray(items) {
    const CHUNK_SIZE = 1000;

    for (let i = 0; i < items.length; i += CHUNK_SIZE) {
        const chunk = items.slice(i, i + CHUNK_SIZE);

        // Process this chunk (heavyComputation is assumed to be defined elsewhere)
        chunk.forEach(item => heavyComputation(item));

        // Yield to the event loop so the browser can repaint
        // and handle user events
        await new Promise(resolve => setTimeout(resolve, 0));

        // Update progress (updateProgressBar is assumed to be defined elsewhere)
        updateProgressBar(Math.min(100, (i + CHUNK_SIZE) / items.length * 100));
    }

    console.log('All items processed!');
}

This pattern uses await new Promise(resolve => setTimeout(resolve, 0)) to yield control back to the event loop between chunks. This allows the browser to repaint the progress bar, handle user interactions, and prevents the tab from becoming unresponsive during long-running operations.
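
If the chunks drive visual updates, one possible variant (an assumption about the use case, not part of the pattern above) is to yield via requestAnimationFrame instead, so each chunk completes just before the next repaint:

// Hypothetical helper: resolve a promise on the next animation frame
function nextFrame() {
    return new Promise(resolve => requestAnimationFrame(resolve));
}

// Inside the loop, replace the setTimeout-based yield with:
// await nextFrame();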

Practice Exercise

Predict the output of the following code without running it. Write down your answer, then verify it in the browser console. Trace through each step using the three-column approach (Call Stack, Microtask Queue, Macrotask Queue).

console.log('A');

setTimeout(() => console.log('B'), 0);

Promise.resolve()
    .then(() => {
        console.log('C');
        setTimeout(() => console.log('D'), 0);
        return Promise.resolve();
    })
    .then(() => console.log('E'));

queueMicrotask(() => {
    console.log('F');
    queueMicrotask(() => console.log('G'));
});

setTimeout(() => {
    console.log('H');
    Promise.resolve().then(() => console.log('I'));
}, 0);

console.log('J');

After you have your prediction, run the code and compare. If your answer differed, go back through the event loop algorithm and trace each step carefully. Pay special attention to when new microtasks are added during microtask processing and how Promise chains create sequential microtasks.