Using Web Debugging Proxies

My previous two articles focused on debugging tools, so it's only fitting that I continue with this theme. When debugging front-end code, you tend to spend a lot of time reviewing how CSS and JavaScript affect your page's rendering; equally important is how network requests affect your site. In many cases, we work locally and forget that page size, latency, and script loading and blocking can greatly affect the way a user experiences your site. So having a good set of tools to inspect network traffic is vital to rounding out your debugging toolset.

Thankfully, all major modern browsers provide debugging tools that allow you to inspect network traffic, and third-party tools like Fiddler and Charles not only let you see network requests, but also offer expanded capabilities for interacting with your site.

We'll explore both types of tools.


Browser-Based Traffic Sniffing

As I mentioned, every major browser has built-in debugging tools. These include:

  • Internet Explorer's F12 Developer Tools
  • Firefox's Web Developer Tools and the Firebug add-on
  • Chrome's Developer Tools
  • Opera's Dragonfly
  • Safari's Web Inspector

Each toolset has its own unique capabilities, but all of them can collect network traffic. If we look at the following images, you can see that while the UIs may vary, the data collected and displayed is very similar:

The end result is a list of the browser's network requests involved in downloading our page's assets or data. The networking tool intercepts these requests to show you important data, such as:


  • The type of request (GET, POST, etc.)
  • What's being requested (the URI)
  • The status
  • The size
  • How long it took to fulfill the request
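Most of these network panels can export their captured traffic as a HAR (HTTP Archive) file, which records exactly the fields listed above. As a sketch, here's how you might pull those fields out of a HAR entry yourself; the sample entry and URL are made up for illustration:

```javascript
// Sketch: summarize a HAR (HTTP Archive) entry into the fields the
// network panel displays. The sample entry below is fabricated.
function summarizeEntry(entry) {
  return {
    method: entry.request.method,        // GET, POST, etc.
    url: entry.request.url,              // what's being requested
    status: entry.response.status,       // HTTP status code
    bytes: entry.response.content.size,  // response size
    ms: Math.round(entry.time)           // time to fulfill the request
  };
}

const sample = {
  request: { method: "GET", url: "https://example.com/app.js" },
  response: { status: 200, content: { size: 2048 } },
  time: 1223.4
};

console.log(summarizeEntry(sample));
// { method: 'GET', url: 'https://example.com/app.js', status: 200, bytes: 2048, ms: 1223 }
```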

So if we look at the results from Firebug, we can see that the request pulled back the main page and its associated CSS and JavaScript files, including assets from Amazon's AWS. Due to image constraints, I can't show you everything it loaded, but image and Flash (.swf) files were returned as well.


Digging Deeper

By having this information, I can now drill down into specific requests to determine if I'm receiving the proper data, or why I might have a long-running request. Suppose I look at the request for Webtrends' JavaScript file. It took 1.2 seconds to download, and I want to see how the request is being handled. I can expand the request and determine if it's being gzipped (it is):

and if it's been minified:

In this case, the file hasn't been minified, and I can follow up with the developer to determine if it makes sense to do so. Granted, it's a 2K file, but every byte matters, and this information allows me to better optimize my site.
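Both checks can be approximated programmatically: gzipping is advertised in the response's Content-Encoding header, and minified files tend to have very long lines. A sketch, with the caveat that the minification heuristic is my own rough rule of thumb, not a standard:

```javascript
// Sketch: two rough checks you could run against a captured response.
// Assumes header keys have been lowercased.
function isGzipped(headers) {
  return (headers["content-encoding"] || "").toLowerCase().includes("gzip");
}

// Heuristic: minified JS packs code into very long lines.
// The threshold is arbitrary, not a standard.
function looksMinified(source) {
  const lines = source.split("\n");
  const avgLineLength = source.length / lines.length;
  return avgLineLength > 200;
}

console.log(isGzipped({ "content-encoding": "gzip" }));  // true
console.log(looksMinified("var a = 1;\nvar b = 2;\n"));  // false
```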


Network Timing

Network latency can be a killer, especially for single-page apps that depend on external APIs or multiple script files for their functionality. Most browsers try to load assets asynchronously when they can, but some, like JavaScript files, block the page while they download and execute. It's important to be able to pin those down in order to optimize resource loading as much as possible. If we look at this image, you can see that the file took 1.4 seconds to load:

By hovering over the timelines, we get a dialog that gives us a breakdown of how the request progressed:

Part of that was because it was blocked from loading for 760ms. If this turned out to be a pervasive issue, you could look into using a script loader like RequireJS to better manage script loading and dependencies.
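Besides a loader like RequireJS, you can also hint to the browser how a script should load with the `async` and `defer` attributes, provided the script isn't critical to initial rendering. A minimal markup sketch (the file name is hypothetical):

```html
<!-- Blocks HTML parsing while it downloads and executes: -->
<script src="analytics.js"></script>

<!-- Downloads in parallel, executes as soon as it arrives (any order): -->
<script src="analytics.js" async></script>

<!-- Downloads in parallel, executes after parsing, in document order: -->
<script src="analytics.js" defer></script>
```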


Ajax Requests

Because dynamic apps are so pervasive, being able to inspect XHR calls is vital. Previously, you saw a ton of network requests, and trying to filter through all of them to find your XHR calls isn't efficient. Because of this, most tools let you choose which types of requests you'd like displayed. Here I'm filtering by XHR requests so I can evaluate the request and response:

By drilling down into the request, I can evaluate important details about it, such as the headers and status, the request method, cookies, and, most importantly, the response that was returned:

HTML was returned in this case, but the response could be anything, including text, JSON, or XML. The great thing is that I'm able to inspect it fully in case I run into any trouble.
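That same Content-Type distinction matters when your own code consumes the response: the header tells you how to interpret the body. A minimal sketch:

```javascript
// Sketch: interpret a response body based on its Content-Type header.
function parseBody(contentType, body) {
  if (contentType.includes("application/json")) {
    return JSON.parse(body);
  }
  // HTML, XML, and plain text all come back as strings here;
  // a real app would hand XML off to a proper parser.
  return body;
}

console.log(parseBody("application/json", '{"ok":true}')); // { ok: true }
console.log(parseBody("text/html", "<p>hi</p>"));          // <p>hi</p>
```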


Cookies

Cookies are incredibly useful, and since we use them extensively, having an easy way to inspect their values makes life easier. Developer tools make it easy to do that by showing you which cookies were sent and received:

If you've ever done server-side development without client-side tools, you'll know why this is so awesome.
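Under the hood there's no magic: cookies travel as plain `name=value` pairs in the Cookie request header, which is exactly what these panels parse and display for you. A minimal sketch:

```javascript
// Sketch: parse a Cookie request header into an object -- the same
// name=value pairs the developer tools display.
function parseCookies(header) {
  const jar = {};
  for (const pair of header.split(/;\s*/)) {
    const eq = pair.indexOf("=");
    if (eq > -1) {
      jar[pair.slice(0, eq)] = decodeURIComponent(pair.slice(eq + 1));
    }
  }
  return jar;
}

console.log(parseCookies("session=abc123; theme=dark"));
// { session: 'abc123', theme: 'dark' }
```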

Overall, the great thing about this is that the capability is built right into your browser, making it incredibly convenient to pop open the debugger and check things out. Sometimes, though, you need a little more horsepower.


3rd Party HTTP Proxy Tools

HTTP proxy applications like Fiddler and the Charles Web Debugging Proxy are the big brothers of browser-based network traffic sniffers. Not only can they intercept network requests from the browser, but also from other applications on your machine, making them much more versatile for debugging. They also tend to offer richer features, such as:

  • Bandwidth throttling
  • Autoresponders for specific requests
  • On-the-fly asset replacement (e.g., a JavaScript file)
  • SSL proxying
  • Plugin ecosystem
  • Customizable scripts
  • Recording and replay of testing scenarios

I use the Windows-based, incredibly feature-rich Fiddler extensively (it's freeware!). It's also used heavily inside of Microsoft because of its robust feature set. The developer of Fiddler, Eric Lawrence, previously worked at Microsoft and still maintains the application.

If we look at the UI, you'll see similarities in the output to what we saw in the browser tools. All of the network requests show up along with key information about the requests.

And by drilling into a request, I can see extensive detail about it, including the minified source of the jQuery library:

Much of that information can be pulled back via the browser-based tools, but what happens when you want to see if a specific library is blowing up your site? You can definitely swap out libraries and troubleshoot. A better route would be to build a Fiddler AutoResponder that intercepts your request and replaces the production library with one of your choice. Think about that for a second. Fiddler will take the request for your URI and replace it with a local file. Let's check it out.

First, I need to identify the URI I want to replace. In this case, I see that my blog's theme is running jQuery v1.2.6. That's insane, but before I drop it in and wreak havoc on my site, I'd like to see if jQuery v1.8.3 works as expected.

I click on the entry for jQuery v1.2.6. In the right column of Fiddler, I select the "AutoResponder" tab and check "Enable automatic responses". Kicking off the responder is as simple as dragging the URI into the rule editor. You'll notice that the rule starts by comparing the URI; if it matches, Fiddler responds with an action of your choice.

Since I want to test out jQuery 1.8.3, I want the rule to swap out the production version with a local copy of jQuery that I have on my computer.

I save the rule and re-load my page. The end result is that while the URI might look the same, inspecting the results verifies that jQuery v1.8.3 was in fact injected, allowing me to test this on the fly without making any changes to the site:
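If you'd rather script the swap than click through the UI, the same replacement can be expressed in FiddlerScript (Rules > Customize Rules...). This is a sketch: `uriContains` and the `x-replywithfile` session flag are Fiddler features, but the match string and local file path here are hypothetical:

```javascript
// FiddlerScript sketch (JScript.NET) -- goes in the Handlers class's
// OnBeforeRequest method via Rules > Customize Rules...
static function OnBeforeRequest(oSession: Session) {
  if (oSession.uriContains("jquery-1.2.6.min.js")) {
    // Serve a local file instead of letting the request hit the server.
    oSession["x-replywithfile"] = "C:\\scripts\\jquery-1.8.3.min.js";
  }
}
```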

From a debugging perspective, I can't stress how useful this is, especially when you're trying to nail down a bug in older versions of a framework or library.

My good friend Jonathan Sampson made a great screencast on using this feature.


Add-on Ecosystem


Fiddler benefits from an extensive add-on ecosystem that extends its functionality via the IFiddlerExtension interface. There are add-ons that allow:

  • Stress testing
  • Security auditing
  • Traffic diffing to compare two traffic profiles
  • JavaScript formatting

By itself, Fiddler has a TON of features - too many to describe in this post. That's why there's a 330-page book on how to take full advantage of it. It's only $10 and will help you learn the ins and outs of this great tool.


OS X and Linux

If you're on OS X or Linux, the best option is the Charles Web Debugging Proxy. It's a great, well-supported app, and while it's commercial software, it's worth every penny. I've looked for good alternatives focused on web development, and Charles really stood out.

The interface is similar to Fiddler, but it offers two different ways of looking at network traffic:

The style is entirely up to you. I tend to lean toward the structured view because it feels a little more organized, though it takes a little more work to find a specific URI.

Like Fiddler, Charles also offers an autoresponder capability. It's called "Map Local...", and you get to it by right-clicking on a specific URI. This allows you to choose a local file to serve in its place.

When I reload the page, jQuery v1.2.6 is replaced by the local copy of jQuery v1.9 that was on my computer.

Another great feature of Charles is the ability to throttle your network requests to simulate specific bandwidth speeds. I remember the days of 56k modems and their blazing speeds, so using this brings back fond memories (um, right):

Charles also works on Windows, since it offers a complete cross-platform UI.


Which Tool to Use

I use all of these tools all of the time because I test on every major browser. Having this capability really makes troubleshooting easier. Naturally, choosing whether to use a browser-based sniffer or a hard-core app-based proxy depends entirely on your debugging needs.

If you just have to inspect some traffic and check results, a browser-based sniffer will most likely suit you perfectly.

On the other hand, if you need granular control over how URIs respond, or want the flexibility to create custom test scripts, then a tool like Fiddler or Charles is where you need to go. The great thing is that we have solid choices to help us do this, especially as the complexity of our projects increases.
