Latency: What you can control on the front end
Observing where latency occurs at every layer of your application’s stack is critical to understanding its performance, and that can be especially challenging when you’re building your own open source observability solution to monitor your front end.
You have less of a latency budget than you think. Measuring the delays caused by operations like network requests, script loading, DOM rendering, and painting is necessary, but those numbers do not tell the whole performance story. If you step beyond what a browser dev tools audit can show you and consider the user experience at the human level, you’ll find that more kinds of latency make up the time it takes for users to experience your app: human, hardware, software, and environmental.
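For the measurable half of that picture, the browser’s standard Performance APIs expose most of these delays directly. Here’s a minimal sketch; the console calls stand in for wherever you actually ship this data:

```ts
// Navigation timing: network and document milestones for the initial load.
const [nav] = performance.getEntriesByType(
  "navigation",
) as PerformanceNavigationTiming[];
if (nav) {
  console.log("Time to first byte (ms):", nav.responseStart - nav.requestStart);
  console.log("DOM interactive (ms):", nav.domInteractive - nav.startTime);
  console.log("Full load (ms):", nav.loadEventEnd - nav.startTime);
}

// Paint timing: first paint and first contentful paint.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name} (ms):`, entry.startTime);
  }
}).observe({ type: "paint", buffered: true });

// Resource timing: per-asset fetch durations, filtered here to scripts.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceResourceTiming[]) {
    if (entry.initiatorType === "script") {
      console.log("Script load (ms):", entry.name, entry.duration);
    }
  }
}).observe({ type: "resource", buffered: true });
```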
All of these effects coalesce to create your app’s performance issues, and your users feel them most. Starting from their perspective as the entry point into your app’s performance pipeline, some of your latency calculations might begin at their earliest measurable interaction (like the initial load request) and end with an estimate of how long it took a human to interpret the corresponding result on screen (like the Largest Contentful Paint plus the median human reaction time).
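Here’s a rough sketch of that human-level estimate, assuming a ~250 ms median human visual reaction time (an approximation, not a value measured from your users):

```ts
// Assumed median visual reaction time; tune to whatever figure you trust.
const MEDIAN_HUMAN_REACTION_MS = 250;

new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1]; // the latest candidate is the final LCP
  console.log(
    "Estimated time until the user perceives the result (ms):",
    lcp.startTime + MEDIAN_HUMAN_REACTION_MS,
  );
}).observe({ type: "largest-contentful-paint", buffered: true });
```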
Knowing what delays your users are actually experiencing, and where, gives you actionable insight into why your app’s performance gets hung up. By continuously observing the layers you can control, from the user session down to what happens in the back end when their requests are made, your front end team can quantify the issues that matter most to your bottom line and keep users from bouncing. This in turn keeps users around (and your business growing), since they won’t inherit performance issues that were fixed before release instead of shipped to them.
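Continuous observation ultimately means getting these measurements off the device and into the same pipeline as your back-end telemetry. A minimal sketch, assuming a hypothetical /observability/rum collector endpoint and a placeholder session-id scheme:

```ts
// Hypothetical session identifier used to correlate front-end timings
// with back-end traces in your own pipeline.
const SESSION_ID = crypto.randomUUID();

function report(metric: string, valueMs: number): void {
  const payload = JSON.stringify({
    sessionId: SESSION_ID,
    metric,
    valueMs,
    ts: Date.now(),
  });
  // sendBeacon is fire-and-forget and survives page unloads,
  // so metrics recorded late in the session still arrive.
  navigator.sendBeacon("/observability/rum", payload);
}

// Long tasks block the main thread and surface to users as jank.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    report("long-task", entry.duration);
  }
}).observe({ type: "longtask", buffered: true });
```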