Building Your First Web Scraper, Part 3

Welcome back to this series on building a web scraper. In this tutorial, I'll go through an example of scraping data from my own podcast site. I'll cover in detail how I extracted the data, how the helper and utility methods accomplish their jobs, and how all the puzzle pieces come together.

Topics

  • Scraping My Podcast
  • Pry
  • Scraper
  • Helper Methods
  • Writing Posts

Scraping My Podcast

Let’s put what we’ve learned so far into practice. For various reasons, a redesign for my podcast Between | Screens was long overdue. There were issues that made me scream when I woke up in the morning. So I decided to set up a whole new static site, built with Middleman and hosted with GitHub Pages.

I invested a good amount of time on the new design after I tweaked a Middleman blog to my needs. All that was left to do was to import the content from my database-backed Sinatra app, so I needed to scrape the existing content and transfer it into my new static site.

Doing this by hand in schmuck fashion was not on the table—not even a question—since I could rely on my friends Nokogiri and Mechanize to do the job for me. What is ahead of you is a reasonably small scrape job that is not too complicated but offers a few interesting twists that should be educational for the web scraping newbies out there.

Below are two screenshots from my podcast. 

Screenshot: Old Podcast

Screenshot: New Podcast

Let’s break down what we want to accomplish. We want to extract the following data from 139 episodes that are spread over 21 paginated index pages:

  • the title
  • the interviewee
  • the subheader with the topic list
  • the SoundCloud track number for each episode
  • the date
  • the episode number
  • the text from the show notes
  • the links from the show notes

We iterate through the pagination and let Mechanize click every link for an episode. On the following detail page, we will find all the information from above that we need. Using that scraped data, we want to populate the front matter and “body” of the markdown files for each episode.

Below you can see a preview of how we will compose the new markdown files with the content we extracted. I think this will give you a good idea of the scope ahead of us. This represents the final step in our little script. Don’t worry, we will go over it in more detail.

def compose_markdown

I also wanted to add a few tricks that the old site couldn’t play. Having a customized, comprehensive tagging system in place was crucial for me. I wanted listeners to have a deep discovery tool. Therefore, I needed tags for every interviewee and split the subheader into tags as well. Since I produced 139 episodes in the first season alone, I had to prep the site for a time when the amount of content becomes harder to comb through. A deep tagging system with intelligently placed recommendations was the way to go. This allowed me to keep the site lightweight and fast.

Let’s have a look at the complete code for scraping the content of my site. Look around and try to figure out the big picture of what’s going on. Since I expect you to be on the beginner side of things, I stayed away from abstracting too much and erred on the side of clarity. I did a couple of refactorings that were targeted at aiding code clarity, but I also left a little bit of meat on the bone for you to play with when you are finished with this article. After all, quality learning happens when you go beyond reading and toy around with some code on your own.

Along the way, I highly encourage you to start thinking about how you can improve the code in front of you. This will be your final task at the end of this article. A little hint from me: breaking up large methods into smaller ones is always a good starting point. Once you understand how the code works, you should have a fun time homing in on that refactoring.

I already started by extracting a bunch of methods into small, focused helpers. You should easily be able to apply what you have learned from my previous articles about code smells and their refactorings. If this still goes over your head right now, don’t worry—we’ve all been there. Just keep at it, and at some point things will start to click faster.

Full Code

Why didn’t we require "Nokogiri"? Mechanize provides us with all our scraping needs. As we discussed in the previous article, Mechanize builds on top of Nokogiri and allows us to extract content as well. It was, however, important to cover that gem in the first article since we needed to build on top of it.
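In other words, the requires at the top of the script boil down to something like this (Pry is the debugging tool we turn to next):

require 'mechanize'
require 'pry'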

Pry

First things first. Before we jump into our code here, I thought it was necessary to show you how you can efficiently check if your code works as expected every step of the way. As you have certainly noticed, I have added another tool to the mix. Among other things, Pry is really handy for debugging. 

If you place Pry.start(binding) anywhere in your code, you can inspect your application at exactly that point. You can pry into the objects at specific points in the application. This is really helpful for stepping through your application without tripping over your own feet. For example, let’s place it right after our write_page function and check whether link is what we expect.

Pry
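As a rough sketch, assuming the loop you saw in the full code above, the placement might look like this:

page.links[2..8].map do |link|
  write_page(link)
  Pry.start(binding) # pause here with `link` (and everything else in scope) available
end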

If you run the script, you will get something like this.

Output

When we then ask for the link object, we can check if we are on the right track before we move on to other implementation details.

Terminal

Looks like what we need. Great, we can move on. Working step by step through the whole application like this is an important practice; it ensures that you don’t get lost and that you really understand how it works. I won’t cover Pry here in any more detail since it would take me at least another full article to do so. I can only recommend using it as an alternative to the standard IRB shell. Back to our main task.

Scraper

Now that you've had a chance to familiarize yourself with the puzzle pieces, I recommend we go over them one by one and clarify a few interesting points here and there. Let’s start with the central pieces.

podcast_scraper.rb

What happens in the scrape method? First of all, I loop over every index page in the old podcast. I’m using the old URL from the Heroku app since the new site is already online at betweenscreens.fm. I had 20 pages of episodes that I needed to loop over. 

I delimited the loop via the link_range variable, which I updated with each loop. Going through the pagination was as simple as using this variable in the URL of each page. Simple and effective.

def scrape
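Here is a sketch of the method; the Heroku URL is shortened to a placeholder, and the loop bound simply covers the 20 index pages:

def scrape
  link_range = 1
  page = nil

  until link_range == 21
    # Placeholder URL standing in for the old Heroku-hosted Sinatra app.
    page = Mechanize.new.get("https://old-sinatra-app.herokuapp.com/?page=#{link_range}")
    link_range += 1

    # Links 2..8 are the episode links on every index page.
    page.links[2..8].map do |link|
      write_page(link)
    end
  end
end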

Then, whenever I get a new page with another eight episodes to scrape, I use page.links to identify the links we want to click and follow to the detail page for each episode. I decided to go with a range of links (links[2..8]) since it is consistent on every page, and it was also the easiest way to target the links I need from each index page. No need to fumble around with CSS selectors here.

We then feed that link for the detail page to the write_page method. This is where most of the work is done. We take that link, click it, and follow it to the detail page where we can start to extract its data. On that page we find all the information that I need to compose my new markdown episodes for the new site. 

def write_page

def extract_data
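A rough sketch of that method; the helper names are stand-ins that mirror the data they extract, and the hash keys line up with what the front matter needs later:

def extract_data(detail_page)
  interviewee      = extract_interviewee(detail_page)
  title            = extract_title(detail_page)
  sc_id            = extract_soundcloud_id(detail_page)
  text             = extract_shownotes_text(detail_page)
  episode_subtitle = extract_subtitle(detail_page)
  episode_number   = extract_episode_number(episode_subtitle)

  options = {
    interviewee:    interviewee,
    title:          title,
    sc_id:          sc_id,
    text:           text,
    tags:           build_tags(title, interviewee),
    date:           clean_date(episode_subtitle),
    episode_number: episode_number
  }
end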

As you can see above, we take that detail_page and apply a bunch of extraction methods to it. We extract the interviewee, title, sc_id, text, episode_subtitle, and episode_number. I extracted a bunch of focused helper methods that are in charge of these extraction responsibilities. Let’s have a quick look at them:

Helper Methods 

Extraction Methods

I extracted these helpers because it enabled me to have smaller methods overall. Encapsulating their behaviour was also important. The code reads better as well. Most of them take the detail_page as an argument and extract some specific data we need for our Middleman posts.

We search the page for a specific selector and get the text without unnecessary white space.

We take the title and remove ? and # since these don’t play nicely with the front matter in the posts for our episodes. More about front matter below.
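A sketch of these two helpers; the CSS selectors are made up and stand in for the old site’s actual markup:

def extract_interviewee(detail_page)
  # '.episode_interviewee' is a placeholder selector.
  detail_page.search('.episode_interviewee').text.strip
end

def extract_title(detail_page)
  # Drop ? and # so the title plays nicely with the front matter.
  detail_page.search('.episode_title').text.gsub(/[?#]/, '')
end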

Here we needed to work a little harder to extract the SoundCloud id for our hosted tracks. First we grab the Mechanize iframes whose href points to soundcloud.com and turn the result into a string for scanning...

Then we match a regular expression against the digits of the track id: our soundcloud_id, "221003494".
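Sketched out, it could look like this; the exact regex is a guess, anything that grabs the long run of digits will do:

def extract_soundcloud_id(detail_page)
  # Collect the iframes that point at soundcloud.com and stringify them for scanning.
  sc = detail_page.iframes_with(href: /soundcloud\.com/).to_s
  # The track id is the long digit sequence in that string, e.g. "221003494".
  sc.scan(/\d{3,}/).first
end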

Extracting show notes is again quite straightforward. We only need to look for the show notes’ paragraphs in the detail page. No surprises here. 

The same goes for the subtitle, except that it is just a preparation to cleanly extract the episode number from it.
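Again, a sketch with placeholder selectors standing in for the real ones:

def extract_shownotes_text(detail_page)
  # '#shownotes p' is a placeholder selector for the show notes paragraphs.
  detail_page.search('#shownotes p')
end

def extract_subtitle(detail_page)
  # The subheader holds both the episode number and the date.
  detail_page.search('.episode_subtitle').text
end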

Here we need another round of regular expressions. Let’s have a look at the subtitle before and after applying the regex.

episode_subtitle

number

One more step until we have a clean number.
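A sketch of both steps; the second helper’s name is a stand-in, and the regex simply grabs the hash sign plus the digits that follow it:

def extract_episode_number(episode_subtitle)
  # Matches something like "#139" inside the subtitle.
  number = /#\d+/.match(episode_subtitle)
  clean_episode_number(number)
end

def clean_episode_number(number)
  # "#139" -> "139"
  number.to_s.tr('#', '')
end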

That number still carries a hash (#), so we remove it. Voilà, we have our first episode number, 139, extracted as well. I suggest we look at the other utility methods as well before we bring it all together.

Utility Methods

After all that extraction business, we have some cleanup to do. We can already start to prepare the data for composing the markdown. For example, I slice the episode_subtitle some more to get a clean date and build the tags with the build_tags method.  

def clean_date
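A sketch of clean_date; the regex is one plausible way to fish the date out of the subtitle, and Date.parse needs the standard library’s date:

require 'date'

def clean_date(episode_subtitle)
  # Pull something like "Aug 26, 2015" out of the subtitle...
  string_date = /\w{3} \d{1,2}, \d{4}/.match(episode_subtitle).to_s
  # ...and turn it into a proper Date object for Middleman.
  Date.parse(string_date)
end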

We run another regex that looks for dates like this: "  Aug 26, 2015". As you can see, this is not very helpful yet. From the string_date we get from the subtitle, we need to create a real Date object. Otherwise it would be useless for creating Middleman posts.

string_date

Therefore we take that string and do a Date.parse. The result looks a lot more promising.

Date

def build_tags
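A sketch; the real work happens in strip_pipes just below:

def build_tags(title, interviewee)
  extracted_tags = strip_pipes(title)
  "#{interviewee}, #{extracted_tags}"
end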

This takes the title and interviewee we have built up inside the extract_data method and removes all pipe characters and junk. We replace pipe characters with commas, strip out @, ?, #, and &, and finally expand the abbreviation for “with”.

def strip_pipes
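A sketch of that helper, assuming the abbreviation in question is "w/":

def strip_pipes(text)
  tags = text.tr('|', ',')        # pipes become commas
  tags = tags.gsub(/[@?#&]/, '')  # drop @, ?, # and &
  tags.gsub('w/', 'with')         # expand the "w/" abbreviation
end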

In the end we include the interviewee name in the tag list as well, and separate each tag with a comma.

Before

After

Each of these tags will end up being a link to a collection of posts for that topic. All of this happened inside the extract_data method. Let’s have another look at where we are:

def extract_data

All that is left to do here is return an options hash with the data we extracted. We can feed this hash into the compose_markdown method, which gets our data ready to be written out as the file I need for my new site.

Writing Posts

def compose_markdown

For publishing podcast episodes on my Middleman site, I opted to repurpose its blogging system. Instead of creating “pure” blog posts, I create show notes for my episodes that display the SoundCloud-hosted episodes via iframes. On the index pages, I only display that iframe plus the title and a few other details.

The format I need for this to work relies on something called front matter. This is basically a key/value store for my static site. It replaces the database my old Sinatra app relied on.

Data like the interviewee name, date, SoundCloud track id, episode number and so on goes between two lines of three dashes (---) at the top of our episode files. Below that comes the content for each episode: questions, links, sponsor mentions, and so on.

Front Matter

In the compose_markdown method, I use a HEREDOC to compose that file with its front matter for each episode we loop through. From the options hash we feed this method, we extract all the data that we collected in the extract_data helper method.

def compose_markdown
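A sketch of the method; the front matter keys mirror the options hash, though the exact field names in the real template may differ slightly:

def compose_markdown(options = {})
<<-HEREDOC
---
title: #{options[:title]}
interviewee: #{options[:interviewee]}
tags: #{options[:tags]}
soundcloud_id: #{options[:sc_id]}
date: #{options[:date]}
episode_number: #{options[:episode_number]}
---

#{options[:text]}
HEREDOC
end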

This is the blueprint for a new podcast episode right there. This is what we came for. Perhaps you are wondering about this particular syntax: #{options[:interviewee]}. I interpolate as usual with strings, but since I’m already inside a <<-HEREDOC, I can leave the double quotes off.

Just to orient ourselves: we're still in the loop, inside the write_page function, for each clicked link to a detail page with the show notes of a single episode. What happens next is preparing to write this blueprint to the file system. In other words, we create the actual episode by providing a file name and the composed markdown_text.

For that final step, we just need to prepare the following ingredients: the date, the interviewee name, and the episode number. Plus the markdown_text of course, which we just got from compose_markdown.

def write_page

Then we only need to take the file_name and the markdown_text and write the file.

def write_page

Let’s break this down as well. For each post, I need a specific format: something like 2016-10-25-Avdi-Grimm-120. I want to write out files that start with the date and include the interviewee name and the episode number.

To match the format Middleman expects for new posts, I needed to take the interviewee name and put it through my helper method to dasherize the name for me, from Avdi Grimm to Avdi-Grimm. Nothing magic, but worth a look:

def dasherize
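A minimal sketch:

def dasherize(text)
  # "  Avdi Grimm " -> "Avdi-Grimm"
  text.strip.tr(' ', '-')
end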

It removes whitespace from the text we scraped for the interviewee name and replaces the white space between Avdi and Grimm with a dash. The rest of the filename is dashed together in the string itself: "date-interviewee-name-episodenumber".

def write_page

Since the extracted content comes straight from an HTML site, I can’t simply use .md or .markdown as the filename extension. I decided to go with .html.erb.md. For future episodes that I compose without scraping, I can leave off the .html.erb part and only need .md.
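Put together, write_page could be sketched like this, wiring up extract_data, compose_markdown, and dasherize as described above:

def write_page(link)
  detail_page = link.click
  options = extract_data(detail_page)

  markdown_text  = compose_markdown(options)
  date           = options[:date]
  interviewee    = options[:interviewee]
  episode_number = options[:episode_number]

  # "date-interviewee-name-episodenumber" plus the .html.erb.md extension.
  file_name = "#{date}-#{dasherize(interviewee)}-#{episode_number}.html.erb.md"

  File.open(file_name, 'w') { |file| file.write(markdown_text) }
end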

After this step, the loop in the scrape function ends, and we should have a single episode that looks like this:

2014-12-01-Avdi-Grimm-1.html.erb.md

This scraper starts at the most recent episode, of course, and loops back to the first. For demonstration purposes, episode 01 is as good as any. At the top you can see the front matter with the data we extracted.

All of that (episode number, date, interviewee name, and so on) was previously locked away in the database of my Sinatra app. Now we have it prepared to be part of my new static Middleman site. Everything below the second set of triple dashes (---) is the text from the show notes: questions and links, mostly.

Final Thoughts

And we are done. My new podcast is already up and running. I’m really glad I took the time to redesign the thing from the ground up. It’s a lot cooler to publish new episodes now. Discovering new content should be smoother for users as well. 

As I mentioned earlier, this is the time where you should go into your code editor to have some fun. Take this code and wrestle with it a bit. Try to find ways to make it simpler. There are a few opportunities to refactor the code.

Overall, I hope this little example gave you a good idea of what you can do with your new web scraping chops. Of course you can take on much more sophisticated challenges; I’m sure there are plenty of small business opportunities to be had with these skills.

But as always, take it one step at a time, and don’t get too frustrated if things don’t click right away. This is not only normal for most people but to be expected. It’s part of the journey. Happy scraping!
