Saturday, October 25, 2014

Tools for checking broken web links - part 2

Part 1 of this 2-part series on Linux link checking tools reviewed the tool linkchecker. This post concludes the series by presenting another tool, klinkstatus.

Unlike linkchecker, which has a command-line interface, klinkstatus is available only as a GUI tool. Installing klinkstatus on Debian/Ubuntu systems is as easy as:

$ sudo apt-get install klinkstatus

After installation, I could not locate klinkstatus in the GNOME menu system. No problem. To run the program, simply execute the klinkstatus command in a terminal window.
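For example, launching it in the background keeps the terminal free for other commands:

$ klinkstatus &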

For an initial exploratory test run, simply specify the starting URL for link checking in the top part of the screen (e.g., http://linuxcommando.blogspot.ca), and click the Start Search button.

You can pause link checking by clicking the Pause Search button and review the results so far. To resume, click Pause Search again; to stop, click Stop Search.

Now that you have reviewed the initial results, you can customize subsequent checks in order to constrain the amount of output that you need to manually analyze and address afterward.

The program's user interface is very well designed. You can specify the common parameters right on the main screen. For instance, after the exploratory test, I wanted to exclude certain domains from link checking. To do that, enter the domain names in the Do not check regular expression field, using the OR operator (the vertical bar '|') to separate multiple domains, e.g., google.com|blogger.com|digg.com.

To customize a parameter that is not exposed on the main screen, click Settings, and then Configure KLinkStatus. There, you will find more parameters such as the number of simultaneous connections (threads) and the timeout threshold.

The link checking output is by default arranged in a tree view with the broken links highlighted in red. The tree structure allows you to quickly determine the location of the broken link with respect to your website.

You may choose to recheck a broken link to determine whether the problem is temporary. Right-click the link in the result pane and select Recheck.

Note that right-clicking a link brings up other options, such as Open URL and Open Referrer URL. With these options, you can quickly view the context of the broken link. This feature would be very useful if it worked. Unfortunately, clicking either option fails with the error message: Unable to run the command specified. The file or folder http://.... does not exist. This turns out to be an unresolved klinkstatus bug. A workaround is to first click Copy URL (or Copy Referrer URL) from the right-click menu, and then paste the URL into a web browser to open it manually.
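Alternatively, if you'd rather not switch to the browser by hand, you can paste the copied URL into xdg-open (part of the xdg-utils package on Debian/Ubuntu), which opens it in your default browser. The URL below is just a placeholder:

$ xdg-open 'http://example.com/some-broken-link'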

The link checking output can be exported to an HTML file. Click File, then Export to HTML, and select whether to include All or just the Broken links.

Below is a final note to my fellow non-US bloggers (I'm blogging from Canada).

If I enter linuxcommando.blogspot.com as the starting URL, the search is immediately redirected to linuxcommando.blogspot.ca and stops there. To klinkstatus, blogspot.com and blogspot.ca are 2 different domains, and when the search reaches an "external" domain (blogspot.ca), it is programmed not to follow links from there. To correct the problem, I specified linuxcommando.blogspot.ca as the starting URL instead.
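To see where a domain redirects before starting a search, one quick check (using curl, not klinkstatus) is to fetch only the response headers and look for the Location header:

$ curl -sI http://linuxcommando.blogspot.com/ | grep -i '^location'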

Monday, October 20, 2014

Tools for checking broken web links - part 1

With a growing web site, it becomes almost impossible to manually uncover all broken links. For WordPress blogs, you can install link checking plugins to automate the process. But these plugins are resource intensive, and some web hosting companies (e.g., WPEngine) ban them outright. Alternatively, you may use web-based link checkers, such as Google Webmaster Tools and the W3C Link Checker. Generally, these tools lack advanced features, for example, the use of regular expressions to filter the URLs submitted for link checking.

This post is part 1 of a 2-part series examining Linux desktop tools for discovering broken links. The first tool is linkchecker, followed by klinkstatus, which is covered in the next post.

I ran each tool on this very blog, "Linux Commando", which, to date, has 149 posts and 693 comments.

linkchecker offers both a command-line interface and a GUI. To install the command-line version on Debian/Ubuntu systems:

$ sudo apt-get install linkchecker

Link checking often produces too much output for the user to sift through. A best practice is to run an initial exploratory test to identify potential issues and to gather information for constraining future tests. I ran the following command as an exploratory test against this blog. The output messages are streamed to both the screen and an output file named errors.csv, in semicolon-separated CSV format.

$ linkchecker -ocsv http://linuxcommando.blogspot.com/ | tee errors.csv

Notes:

  • By default, 10 threads are generated to process the URLs in parallel. The exploratory test resulted in many timeouts during connection attempts. To avoid timeouts, I limited subsequent runs to only 5 threads (-t5), and increased the timeout threshold from 60 to 90 seconds (--timeout=90).
  • The exploratory test output was cluttered with warning messages, such as access denied by robots.txt. For actual runs, I added the parameter --no-warnings to write only error messages.
  • This blog contains monthly archive pages, e.g., 2014_06_01_archive.html, which link to all the content pages posted during the month. To avoid checking those content pages twice, I specified the parameter --no-follow-url=archive\.html to skip the archive pages. If needed, you can specify more than one such parameter.
  • Embedded in the website are some external links that do not require link checking, for example, links to google.com. I used the --ignore-url=google\.com parameter to specify a regular expression to filter them out. Note that, if needed, you can specify multiple occurrences of this parameter.

The revised command is as follows:

$ linkchecker -t5 --timeout=90 --no-warnings --no-follow-url=archive\.html --ignore-url=google\.com --ignore-url=blogger\.com -ocsv http://linuxcommando.blogspot.com/ | tee errors.csv

To visually inspect the output CSV file, open it in a spreadsheet program. Each link error is listed on a separate line, with the first 2 columns being the offending URL and its parent URL, respectively.
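For example, assuming LibreOffice Calc is installed, you can open the file straight from the shell (Calc will prompt for the separator; choose the semicolon):

$ libreoffice --calc errors.csv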

Note that a bad URL can be reported multiple times in the file, often non-consecutively. One such URL is http://doncbex.myopenid.com/. To make it easier to inspect and analyze the broken URLs, sort the lines by the first (URL) column.
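If you prefer to sort from the shell, here is a minimal sketch that assumes the semicolon separator mentioned above and skips any comment lines linkchecker may write at the top of the file:

$ grep -v '^#' errors.csv | sort -t';' -k1,1 > errors_sorted.csv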

A closer examination revealed that many broken URLs were not URLs I had inserted in my website. So, where did they come from? To solve the mystery, I looked up their parent URLs. Lo and behold, those broken links were actually the URL identifiers of the comment authors. Over time, some of those URLs had become obsolete. Because they were genuine comments and provided value, I decided to keep them.

linkchecker did find 5 true broken links that needed fixing.

If you prefer not to use the command line interface, linkchecker has a front-end which you can install like this:

$ sudo apt-get install linkchecker-gui

Not all parameters are available on the front-end for you to modify directly. If a parameter is not exposed in the GUI, such as suppressing warning messages, you need to edit the linkchecker configuration file. This is inconvenient and a potential source of human error. Another shortcoming is that you cannot suspend operation once link checking is in progress.

If you want to use a GUI tool, I'd recommend klinkstatus which is covered in part 2 of this series.