Essential Command-Line Tools for Web Developers

Tools can make our workflows feel seamless, allowing us to focus on what we are building rather than worry about the process. Most web developers, across all portions of the stack, work from the command line. There are countless utilities that can make you more productive. These aren't full-blown command-line applications, such as Git, but rather simple, composable tools that can improve your workflow as a web developer.

If you are on Mac OS X, you can use Homebrew to install any of the tools that are not part of the standard packages. Each of these tools has pages' worth of options, and only a fraction of them are reviewed below. If you'd like to learn more about a tool, begin in the man pages with man <command> or <command> -h.
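For instance (the formula name here is an assumption; check brew search first):

    brew install jq    # install a tool via Homebrew
    man jq             # then read its manual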

Getting familiar with reading the supplied documentation, rather than Googling for an answer, is an essential skill for a productive developer.


cURL

According to its man page, cURL is used to "transfer a URL". It's a venerable Swiss-Army knife for hitting URLs and receiving data in return: it can return the body of a page (such as your external IP address from ifconfig.me), download a file, or show just the headers of a page. And cURL isn't only for pulling data; you can use it to push data to URLs to submit forms or hit APIs. Every developer working on the web should know this tool.

The basic usage of cURL is to download the contents of a website. Here's an example:
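    curl http://www.example.com    # print a page's HTML to stdout
    curl ifconfig.me               # e.g., your external IP address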

You may want to download a file from a site, in which case you'll use the -O flag.
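For example (the VirtualBox URL below is illustrative; grab the current link from the download page):

    curl -O -L http://download.virtualbox.org/virtualbox/VirtualBox-OSX.dmg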

Notice that we also used the -L flag, which tells cURL to follow any redirects (which we need to download VirtualBox).

Often, when configuring web servers, you'll want to see that the headers are being set properly (e.g., Cache-Control and Etags). You can view just the headers of a page with the -I flag:
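    curl -I http://www.example.com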

Setting request headers is fairly common when working with URLs from the command line. Many times, you'll need to set an Accept header to specify the response type, or an Authorization header to pass your credentials. You can also set custom headers, if required by your API.
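Something like the following (the API host and the X-Api-Key header name are illustrative):

    curl -i -H "Accept: application/json" -H "X-Api-Key: abc123" http://api.example.com/v1/users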

In addition to setting an API key header with -H, we've used -i to return the header information along with the response body.

cURL isn't just for pulling down URLs; you can also use it to make POST/PUT requests to submit data to an API or to act like an HTML form.
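A minimal sketch of a JSON POST (the endpoint and payload are illustrative):

    curl -X POST \
      -H "Accept: application/json" \
      -H "Content-Type: application/json" \
      -d '{"name": "New User", "email": "user@example.com"}' \
      http://api.example.com/v1/users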

We've broken the command across multiple lines, using \ to make it easier to read. First, we set the proper headers with -H; you can see that we've set multiple headers by repeating -H. Then, we set the JSON data for the POST using -d.

We've only scratched the surface of the power of cURL, and you should get very familiar with these basic commands. After a while, you may even find yourself browsing the web more often through cURL. The beauty of cURL, as with many UNIX utilities, is that it can be piped into other programs (like grep or jq) to work with the returned data.


jq

If you are working with a lot of JSON APIs (who isn't these days?), then you'll want to become familiar with jq. Described as a lightweight and flexible command-line JSON processor, jq is like sed for JSON. You aren't limited to using jq with APIs, though; it can parse any JSON document (maybe from your favorite NoSQL database, like Riak or Couchbase). The most common use case is piping the results of a cURL request into jq, which gives you a powerful combination for working with JSON APIs.

First, start with retrieving a JSON document; in this case, we're pulling tweets as JSON from Twitter's API, and piping it into jq. We'll return the entire response with . (period).
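Something like this (the endpoint shown is Twitter's old, unauthenticated search API, used here purely for illustration):

    curl 'http://search.twitter.com/search.json?q=command+line' | jq '.'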

The JSON response includes metadata about the request, as well as the results of the query, stored in the results[] array (truncated here). We can pull out just the first tweet with:
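    curl 'http://search.twitter.com/search.json?q=command+line' | jq '.results[0]'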

The tweet contains a fair bit of information that might not be pertinent to our current project, so we can show only the individual fields we're interested in:
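    curl 'http://search.twitter.com/search.json?q=command+line' | jq '.results[0] | {from_user, text}'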

You can collect complex results into usable formats by surrounding the filter in []. For example, we might want to gather all of the image URLs found in our tweets into a single array. Twitter's API returns any included media information within the entities field, so we can collect the URLs like so:
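    curl 'http://search.twitter.com/search.json?q=command+line' | \
      jq '[.results[].entities.media[]?.media_url]'    # the ? skips tweets without media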

That's just the start of what jq is capable of. Refer to the full documentation to unlock its serious power. Once you get the hang of folding up responses into a desired result, you'll be able to work with and transform data like a machine.


ngrep

Ngrep, or network grep, is exactly what it sounds like: grep for network traffic. It allows you to use a regular expression to match network packets, returning information very similar to what you would get from curl -I. Basic usage is helpful for seeing all the requests a page is making, but that's just the start. When you're working with rich JavaScript client applications that make countless AJAX requests, which can be difficult to monitor and debug, ngrep will be your new best friend.
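A typical invocation looks like this (en1 is a common wireless interface name on Macs; substitute your own):

    ngrep -q -W byline -d en1 "^(GET|POST) " tcp and port 80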

This command will show all GET and POST requests made on a non-default interface with -d en1. The -W byline will maintain the line breaks for better readability, and -q will give you less noise about non-matching packets. You can use the host and port parameters to isolate the traffic to the specific application you are working with, if you need to.

ngrep is great for viewing network traffic, and can bring to light what type of data is being passed between your computer and various sites. For example, it wouldn't take long to find a site sending your password in plaintext in the POST request for a login form.


S3cmd

Nearly every developer stores files in Amazon S3 at this point. You might be using it for simple storage of database backups, as the backing for your CDN, or even to serve an entire static site. While the name stands for Simple Storage Service, working with the admin panel can be anything but simple. Besides, why would you want to leave the command line to use an interface for a file system? S3cmd gives you command-line access to your buckets and files on S3, plus much more.

After configuring the tool, which entails entering access keys from the AWS console, you will be able to work with S3 in much the same way that you would your local filesystem.
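Configuration is interactive (the bucket and file names in the examples below are placeholders; substitute your own):

    s3cmd --configure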

Make a bucket:
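    s3cmd mb s3://my-bucket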

List your buckets:
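    s3cmd ls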

Put a file into the bucket:
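    s3cmd put backup.tar.gz s3://my-bucket/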

List the contents of the bucket:
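    s3cmd ls s3://my-bucket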

Download a file from the bucket:
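    s3cmd get s3://my-bucket/backup.tar.gz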

Remove a bucket and its content:
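    s3cmd del --recursive s3://my-bucket    # empty the bucket first
    s3cmd rb s3://my-bucket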

We've just previewed the file system commands, but that is only the start for S3cmd. You can also use it to manage your access control lists, as well as your CloudFront distribution points, from the command line!


Localtunnel

Localtunnel is a project by Jeff Lindsay, sponsored by Twilio, that makes it dead simple to expose your local web server to the Internet. Localtunnel is one tool that takes the UNIX philosophy to heart: it does one thing and does it well. The only option is to upload a public key for authentication, and that only needs to be done once.

Localtunnel is a RubyGem, so you'll need Ruby and RubyGems installed. A simple gem install localtunnel will get you started. Then, to expose a locally running server and port, you simply pass the port you want to expose as an argument.
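For example, to expose a server running on port 8000 (the key path is an example; -k is only needed on the first run):

    gem install localtunnel
    localtunnel -k ~/.ssh/id_rsa.pub 8000    # first run: -k uploads your public key
    localtunnel 8000                         # after that, just pass the port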

Your local server can then be accessed by anyone, anywhere. It's great for sharing work in progress, and perfect for accessing your application on a mobile device. You'll never have to mess with manually tunneling traffic again. Localtunnel solves a simple but painful problem; it's the ideal tool for a web developer.


Just Getting Started

There are numerous other tools central to a web developer's life (load testing, network monitoring, etc.), and they vary widely depending on which portion of the stack you're working in. You might want to visit Command-line Fu or follow Command-line Magic on Twitter to discover new tools. And, of course, you should definitely try New Relic, since no web developer should be without it.
