As a performance engineer, you know that one goal is to diagnose the client's sensitivity and vulnerability to failure under high latency and limited bandwidth. And if you've read Steve Souders's books, you know that behind every browser there's a whole lot of loading activity going on, activity that obeys a somewhat bizarre set of logical rules depending on the browser type and version. When you're investigating a client-side performance issue, all that activity behind the client renders so quickly that it's very difficult to see each step in the process. Why not try to slow it down?
Actually, you can. Using a network throttling tool like Shunra's vCat or the Charles proxy, you can slow the back-end calls down and put the entire rendering sequence in slow motion. It's like watching the instant replay of Mario Manningham's spectacular fourth-quarter sideline catch in the 2012 Super Bowl. In a split second, he took two steps just inbounds before being hit and pushed out. At normal speed you would have missed it.
What's important here is to try to correlate the visual sequence of steps on the front end of the client with the activities happening behind the client -- like the number of active connections and the resource types in flight. This technique can help reveal the application's sensitivity or vulnerability to latency -- which may affect functionality in the real world.
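To see why throttling produces this slow-motion effect, here's a toy sketch in plain Python (not tied to vCat or Charles, and simpler than what those tools actually do): a relay that copies bytes between two sockets while capping throughput, so a transfer that would normally finish in microseconds takes long enough to observe.

```python
import socket
import threading
import time

def throttled_pipe(src, dst, bytes_per_sec=2048):
    """Relay bytes from src to dst, paced to roughly bytes_per_sec."""
    while True:
        data = src.recv(256)          # small chunks keep the pacing smooth
        if not data:
            break
        dst.sendall(data)
        time.sleep(len(data) / bytes_per_sec)  # sleep to stay under the cap
    dst.shutdown(socket.SHUT_WR)

# Wire-up: client writes to la, the throttle relays lb -> ra,
# and the client reads the slowed-down stream from rb.
la, lb = socket.socketpair()
ra, rb = socket.socketpair()
worker = threading.Thread(target=throttled_pipe, args=(lb, ra, 2048))
worker.start()

start = time.time()
la.sendall(b"x" * 1024)               # 1 KB at 2 KB/s should take ~0.5 s
la.shutdown(socket.SHUT_WR)
received = b""
while True:
    chunk = rb.recv(4096)
    if not chunk:
        break
    received += chunk
worker.join()
elapsed = time.time() - start
print(f"got {len(received)} bytes in {elapsed:.2f}s")
```

At normal socket speed that kilobyte would arrive effectively instantly; with the cap in place you can watch the transfer happen, which is exactly the slow-motion view you want when lining up front-end rendering steps against back-end activity.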
Three specific tips I have for you are:
- in Charles proxy, you can switch to a window that shows Active Connections and the resources currently being downloaded
- with Shunra's vCat, in addition to the throttling behavior, you get more sophisticated reporting and a deeper dive into client behavior
- and remember that the purpose of network throttling here is diagnosis -- not to be confused with real-world simulation.