Measuring Your Code Performance

Joris Verbogt
Jun 9 2023
Posted in Engineering & Technology

Using Node Performance Hooks

Finding performance bottlenecks in your code is one of the most challenging parts of software engineering. As your code grows more complex, pinpointing those bottlenecks becomes even harder.

Simple Approach

A simple way of measuring the time a function takes to finish would be something like this:

import { hrtime } from 'node:process'

const NS_PER_SEC = 1e9

// hrtime() returns a [seconds, nanoseconds] tuple relative to an arbitrary point in time
const time = hrtime()

await longRunningMethod()

// passing the earlier tuple returns the elapsed time, again as a [seconds, nanoseconds] tuple
const diff = hrtime(time)

console.log(`longRunningMethod took ${diff[0] * NS_PER_SEC + diff[1]} nanoseconds`)
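
The snippets in this post assume a longRunningMethod is in scope. It is not part of any API; if you want to run the examples yourself, a hypothetical stand-in could look like this:

// Hypothetical stand-in for longRunningMethod: resolves after roughly 100 milliseconds
function longRunningMethod (...args) {
  return new Promise((resolve) => setTimeout(() => resolve(args), 100))
}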

The downside of this approach is that, first of all, you have to deal with the quirky hrtime() data type, but more importantly, it is very difficult to keep track of these timings across different parts of your code. You would need to repeat this kind of structure all over your code base, making it hard to maintain.

Performance API

The W3C proposed a Web Performance Timing API in 2020, to provide browser vendors with a standard way of reporting performance of request timelines and code execution.

The NodeJS team decided to align their existing performance-related modules with this standard, leading to the perf_hooks module implementing a subset of that standard, adapted to the specific context of Node applications.

We're going to take a look at some examples of how you could use these methods to your advantage.

High-Resolution Time

Having a unified, nanosecond-level, numeric timestamp is a good starting point for doing measurements. Luckily, it is there:

import { performance } from 'node:perf_hooks'

const time = performance.now() // returns milliseconds since the start of the node process
await longRunningMethod()
const diff = performance.now() - time

console.log(`longRunningMethod took ${diff} milliseconds`)

No more quirky hrtime()!

Performance Observers

Instead of having to keep track of variables for timestamps and (console) logging all over your code, the perf_hooks module gives you a W3C-compliant PerformanceObserver:

import { PerformanceObserver } from 'node:perf_hooks'

const performanceObserver = new PerformanceObserver((items) => {
  items.getEntries().forEach((entry) => {
    console.log(entry)
  })
})
performanceObserver.observe({ entryTypes: ['measure', 'function'], buffered: false }) // observe user timings ('measure') as well as timerify timings ('function')
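
The observer can live in one central place, for example in your application's startup code. When you no longer need to collect entries, for instance during shutdown, you can stop observing:

// stop receiving performance entries
performanceObserver.disconnect()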

Marked Measurements

For user timings, we can use marks to define interesting points in the execution:

import { performance } from 'node:perf_hooks'

performance.mark('method-start')
await longRunningMethod('test')
performance.mark('method-end')

performance.measure('method', 'method-start', 'method-end') // creates a 'measure' entry spanning the two marks

Our performanceObserver will log the measured entry:

PerformanceMeasure {
  name: 'method',
  entryType: 'measure',
  startTime: 2390.4642089903355,
  duration: 2295.015916004777,
  detail: null
}

This way, the marking, the measuring, and the logging can each happen in separate, more centralized places in your code.
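
To illustrate, you could wrap the marks and measure in a small helper of your own (measured is a hypothetical name, not part of perf_hooks), so call sites stay clean while the observer above takes care of the logging:

import { performance } from 'node:perf_hooks'

// Hypothetical helper: surrounds any (async) call with a pair of marks and a measure
async function measured (name, fn, ...args) {
  performance.mark(`${name}-start`)
  try {
    return await fn(...args)
  } finally {
    performance.mark(`${name}-end`)
    performance.measure(name, `${name}-start`, `${name}-end`)
  }
}

await measured('method', longRunningMethod, 'test')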

Timerify

Although this is certainly an improvement over the simple recipe from the beginning of this post, we still need to add the marks and measurements ourselves. It would be nice if we could simply measure a function call and observe its timing.

Luckily, there is the timerify method, which can wrap any function, synchronous or asynchronous:

import { performance } from 'node:perf_hooks'

// the wrapped function behaves like the original, but emits a 'function' entry on every call
await performance.timerify(longRunningMethod)('test')

And, since we are also observing function performance entries, our performanceObserver will log the entry:

PerformanceEntry {
  name: 'longRunningMethod',
  entryType: 'function',
  startTime: 2390.4642089903355,
  duration: 2295.015916004777,
  detail: [ 'test' ]
}

If the function is executed multiple times, we can even have timerify record each duration in a histogram for later analysis, by passing the histogram as an option:

import { performance, createHistogram } from 'node:perf_hooks'

const histogram = createHistogram()

// wrap once, passing the histogram that should record the durations
const timedMethod = performance.timerify(longRunningMethod, { histogram })

for (let i = 0; i < 100; i++) {
  await timedMethod('test')
}

console.log(histogram)

This will not only give us all the entries in our observer, but also statistical insight into the timing distribution (durations are recorded in nanoseconds):

Histogram {
  min: 100204544,
  max: 109248511,
  mean: 101852774.4,
  exceeds: 0,
  stddev: 1432898.5932139647,
  count: 100,
  percentiles: SafeMap(9) [Map] {
    0 => 100204544,
    50 => 101515264,
    75 => 101777408,
    87.5 => 102432768,
    93.75 => 102825984,
    96.875 => 105906176,
    98.4375 => 109051904,
    99.21875 => 109182976,
    100 => 109182976
  }
}
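
The histogram also exposes its statistics programmatically, so instead of dumping the whole object you could, for example, report a few percentiles converted to milliseconds. A minimal sketch, reusing the histogram from the snippet above:

const NS_PER_MS = 1e6

// read individual statistics instead of logging the whole histogram
console.log(`p50: ${histogram.percentile(50) / NS_PER_MS} ms`)
console.log(`p99: ${histogram.percentile(99) / NS_PER_MS} ms`)
console.log(`mean: ${histogram.mean / NS_PER_MS} ms`)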

Conclusion

As the JavaScript (and NodeJS) ecosystem evolves, new APIs enter the game all the time. The Performance Timing API is a useful one that can help you in your quest for faster code without having to rely on home-grown solutions.

As always, if you have any comments or questions, we are available via our Support Channel.
