Dist::Zilla::LocaleTextDomain for Translators

Here’s a followup on my post about localizing Perl modules with Locale::TextDomain. Dist::Zilla::LocaleTextDomain was great for developers, less so for translators. A Sqitch translator asked how to test the translation file he was working on. My only reply was to compile the whole module, then install it and test it. Ugh.

Today, I released Dist::Zilla::LocaleTextDomain v0.85 with a new command, msg-compile. This command allows translators to easily compile just the file they’re working on and nothing else. For pure Perl modules in particular, this makes testing pretty easy. By default, the compiled catalog goes into ./LocaleData, where it’s simple to convince the module to find it. For example, I updated the test sqitch app to take advantage of this. Now, to test, say, the French translation file, all the translator has to do is:

> dzil msg-compile po/fr.po
[LocaleTextDomain] po/fr.po: 155 translated messages, 24 fuzzy translations, 16 untranslated messages.

> LANGUAGE=fr ./t/sqitch foo
"foo" n'est pas une commande valide

I hope this simplifies things for translators. See the notes for translators for a few more words on the subject.

Localize Your Perl modules with Locale::TextDomain and Dist::Zilla

I've just released Dist::Zilla::LocaleTextDomain v0.80 to the CPAN. This module adds support for managing Locale::TextDomain-based localization and internationalization in your CPAN libraries. I wanted to make it as simple as possible for CPAN developers to do localization and to support translators in their projects, and Dist::Zilla seemed like the perfect place to do it, since it has hooks to generate the necessary binary files for distribution.

Starting out with Locale::TextDomain was decidedly non-intuitive for me, as a Perl hacker, likely because of its gettext underpinnings. Now that I've got a grip on it and created the Dist::Zilla support, I think it's pretty straight-forward. To demonstrate, I wrote the following brief tutorial, which constitutes the main documentation for the Dist::Zilla::LocaleTextDomain distribution. I hope it makes it easier for you to get started localizing your Perl libraries.

Localize Your Perl modules with Locale::TextDomain and Dist::Zilla

Locale::TextDomain provides a nice interface for localizing your Perl applications. The tools for managing translations, however, are a bit arcane. Fortunately, you can just use this plugin and get all the tools you need to scan your Perl libraries for localizable strings, create a language template, and initialize translation files and keep them up-to-date. All this assumes that your system has the gettext utilities installed.

The Details

I put off learning how to use Locale::TextDomain for quite a while because, while the gettext tools are great for translators, the tools for the developer were a little more opaque, especially for Perlers used to Locale::Maketext. But I put in the effort while hacking Sqitch. As I had hoped, using it in my code was easy. Using it for my distribution was harder, so I decided to write Dist::Zilla::LocaleTextDomain to make life simpler for developers who manage their distributions with Dist::Zilla.

What follows is a quick tutorial on using Locale::TextDomain in your code and managing it with Dist::Zilla::LocaleTextDomain.

This is my domain

The first thing to do is start using Locale::TextDomain in your code. Load it into each module with the name of your distribution, as set by the name attribute in your dist.ini file. For example, if your dist.ini looks something like this:

name    = My-GreatApp
author  = Homer Simpson <homer@example.com>
license = Perl_5

Then, in your Perl libraries, load Locale::TextDomain like this:

use Locale::TextDomain qw(My-GreatApp);

Locale::TextDomain uses this value to find localization catalogs, so naturally Dist::Zilla::LocaleTextDomain will use it to put those catalogs in the right place.

Okay, so it's loaded, how do you use it? The documentation of the Locale::TextDomain exported functions is quite comprehensive, and I think you'll find it pretty simple once you get used to it. For example, simple strings are denoted with __:

say __ 'Hello';

If you need to specify variables, use __x:

say __x(
    'You selected the color {color}',
    color => $color
);

Need to deal with plurals? Use __n:

say __n(
    'One file has been deleted',
    'All files have been deleted',
    $num_files,
);

And then you can mix variables with plurals with __nx:

say __nx(
    'One file has been deleted.',
    '{count} files have been deleted.'",
    $num_files,
    count => $num_files,
);

Pretty simple, right? Get to know these functions, and just make it a habit to use them in user-visible messages in your code. Even if you never expect to translate those messages, just by doing this you make it easier for someone else to come along and start translating for you.

The setup

Now you're localizing your code. Great! What's next? Officially, nothing. If you never do anything else, your code will always emit the messages as written. You can ship it and things will work just as if you had never done any localization.

But what's the fun in that? Let's set things up so that translation catalogs will be built and distributed once they're written. Add these lines to your dist.ini:

[ShareDir]
[LocaleTextDomain]

There are actually quite a few attributes you can set here to tell the plugin where to find language files and where to put them. For example, if you used a domain different from your distribution name, e.g.,

use Locale::TextDomain 'com.example.My-GreatApp';

Then you would need to set the textdomain attribute so that the LocaleTextDomain plugin does the right thing with the language files:

[LocaleTextDomain]
textdomain = com.example.My-GreatApp

Consult the LocaleTextDomain configuration docs for details on all available attributes.

(Special note until this Locale::TextDomain patch is merged: set the share_dir attribute to lib instead of the default value, share. If you use Module::Build, you will also need a subclass to do the right thing with the catalog files; see "Installation" in Dist::Zilla::Plugin::LocaleTextDomain for details.)
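A minimal sketch of such a Module::Build subclass, assuming the compiled .mo catalogs end up under lib/ via the share_dir = lib workaround (see the plugin's documentation for the canonical recipe):

package My::Builder;

use strict;
use warnings;
use parent 'Module::Build';

sub new {
    my ( $class, %args ) = @_;
    my $self = $class->SUPER::new(%args);
    # Treat .mo files under lib/ as build elements, so they get
    # copied into blib/ and installed alongside the modules.
    $self->add_build_element('mo');
    return $self;
}

1;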

What does including the plugin do? Mostly nothing. You might see this line from dzil build, though:

[LocaleTextDomain] Skipping language compilation: directory po does not exist

Now at least you know it was looking for something to compile for distribution. Let's give it something to find.

Initialize languages

To add translation files, use the msg-init command:

> dzil msg-init de
Created po/de.po.

At this point, the gettext utilities will need to be installed and visible in your path, or else you'll get errors.

This command scans all of the Perl modules gathered by Dist::Zilla and initializes a German translation file, named po/de.po. This file is now ready for your German-speaking public to start translating. Check it into your source code repository so they can find it. Create as many language files as you like:

> dzil msg-init fr ja.JIS en_US.UTF-8
Created po/fr.po.
Created po/ja.po.
Created po/en_US.po.

As you can see, each language results in the generation of the appropriate file in the po directory, sans encoding (i.e., no .UTF-8 in the en_US file name).

Now let your translators go wild with all the languages they speak, as well as the regional dialects. (Don't forget to colour your code with en_GB translations!)

Once you have translations and they're committed to your repository, when you build your distribution, the language files will automatically be compiled into binary catalogs. You'll see this line output from dzil build:

[LocaleTextDomain] Compiling language files in po
po/de.po: 10 translated messages, 1 fuzzy translation, 0 untranslated messages.
po/fr.po: 10 translated messages, 1 fuzzy translation, 0 untranslated messages.
po/ja.po: 10 translated messages, 1 fuzzy translation, 0 untranslated messages.
po/en_US.po: 10 translated messages, 1 fuzzy translation, 0 untranslated messages.

You'll then find the catalogs in the shared directory of your distribution:

> find My-GreatApp-0.01/share -type f
My-GreatApp-0.01/share/LocaleData/de/LC_MESSAGES/My-GreatApp.mo
My-GreatApp-0.01/share/LocaleData/en_US/LC_MESSAGES/My-GreatApp.mo
My-GreatApp-0.01/share/LocaleData/fr/LC_MESSAGES/My-GreatApp.mo
My-GreatApp-0.01/share/LocaleData/ja/LC_MESSAGES/My-GreatApp.mo

These binary catalogs will be installed as part of the distribution just where Locale::TextDomain can find them.

Here's an optional tweak: add this line to your MANIFEST.SKIP:

^po/

This prevents the po directory and its contents from being included in the distribution. Sure, you can include them if you like, but they're not required for the running of your app; the generated binary catalog files are all you need. Might as well leave out the translation files.

Mergers and acquisitions

You've got translation files, and helpful translators have given them a workover. What happens when you change your code, add new messages, or modify existing ones? The translation files need to be updated periodically with those changes, so that your translators can deal with them. We've got you covered with the msg-merge command:

> dzil msg-merge
extracting gettext strings
Merging gettext strings into po/de.po
Merging gettext strings into po/en_US.po
Merging gettext strings into po/ja.po

This will scan your module files again and update all of the translation files with any changes. Old messages will be commented-out and new ones added. Just commit the changes to your repository and notify the translation army that they've got more work to do.
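For example, a message that has disappeared from the code gets gettext's obsolete marker in the merged file, something like this (the German translation is invented for illustration):

#~ msgid "This message is no longer in the code"
#~ msgstr "Diese Meldung ist nicht mehr im Code"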

If for some reason you need to update only a subset of language files, you can simply list them on the command-line:

> dzil msg-merge po/de.po po/en_US.po
Merging gettext strings into po/de.po
Merging gettext strings into po/en_US.po

What's the scan, man

Both the msg-init and msg-merge commands depend on a translation template file to create and merge language files. Thus far, this has been invisible: they will create a temporary template file to do their work, and then delete it when they're done.

However, it's common to also store the template file in your repository and to manage it directly, rather than implicitly. If you'd like to do this, the msg-scan command will scan the Perl module files gathered by Dist::Zilla and generate the template for you:

> dzil msg-scan
extracting gettext strings into po/My-GreatApp.pot

The resulting .pot file will then be used by msg-init and msg-merge rather than scanning your code all over again. This makes msg-merge a two-step process: you need to update the template before merging. Updating the template is done with exactly the same command, msg-scan:

> dzil msg-scan
extracting gettext strings into po/My-GreatApp.pot
> dzil msg-merge
Merging gettext strings into po/de.po
Merging gettext strings into po/en_US.po
Merging gettext strings into po/ja.po

Ship It!

And that's all there is to it. Go forth and localize and internationalize your Perl apps!

Acknowledgements

My thanks to Ricardo Signes for invaluable help plugging in to Dist::Zilla, to Guido Flohr for providing feedback on this tutorial and being open to my pull requests, to David Golden for I/O capturing help, and to Jérôme Quelin for his patience as I wrote code to do the same thing as Dist::Zilla::Plugin::LocaleMsgfmt without ever noticing that it already existed.

Today on the Perl Advent Calendar

Hey look everybody, I wrote today's Perl Advent Calendar post, Less Tedium, More Transactions. Go read it!

Up for Adoption: SVN::Notify

I’ve kept my various Perl modules in a Subversion server run by my Bricolage support company, Kineticode, for many years. However, I’m having to shut down the server I’ve used for all my services, including Subversion, so I’ve moved them all to GitHub. As such, I no longer use Subversion in my day-to-day work.

It no longer seems appropriate that I maintain SVN::Notify. This has probably been my most popular module, and I know that it’s used a lot. It’s also relatively stable, with few bug reports or complaints. Nevertheless, there certainly could be some things that folks want to add, like TLS support, I18N, and inline CSS.

Therefore, SVN::Notify is formally up for adoption. If you’re a Subversion user, it’s a great tool. Just look at this sample output. If you’d like to take over maintenance and make it even better, please get in touch. Leave a comment on this post, or @theory me on Twitter, or send an email.

PS: Would love it if someone also could take over activitymail, the CVS notification script from which SVN::Notify was derived — and which I have even less right to maintain, given that I haven’t used CVS in years.

DBIx::Connector Exception Handling Design

In response to a bug report, I removed the documentation suggesting that one use the catch function exported by Try::Tiny to specify an exception-handling function to the DBIx::Connector execution methods. When I wrote those docs, Try::Tiny's catch function just returned the subroutine it was passed. It was later changed to return an object, and that didn't work very well. It seemed a much better idea not to depend on an external function that could change its behavior when there is no direct dependency on Try::Tiny in DBIx::Connector, so I removed that documentation in 0.43. So instead of this:

$conn->run(fixup => sub {
    ...
}, catch {
    ...
});

It now recommends this:

$conn->run(fixup => sub {
    ...
}, catch => sub {
    ...
});

Which frankly is better balanced anyway.

But in discussion with Mark Lawrence in the ticket, it has become clear that there's a bit of a design problem with this approach. And that problem is that there is no try keyword, only catch. The fixup in the above example does not try, but the inclusion of the catch implicitly makes it behave like try. That also means that if you use the default mode (which can be set via the mode method), then there will usually be no leading keyword, either. So we get something like this:

$conn->run(sub {
    ...
}, catch => sub {
    ...
});

So it starts with a sub {} and no fixup keyword, but there is a catch keyword, which implicitly wraps that first sub {} in a try-like context. And aesthetically, it's unbalanced.

So I'm trying to decide what to do about these facts:

  • The catch implicitly makes the first sub be wrapped in a try-type context but without a try-like keyword.
  • If one specifies no mode for the first sub but has a catch, then it looks unbalanced.

At one level, I'm beginning to think that it was a mistake to add the exception-handling code at all. Really, that should be the domain of another module like Try::Tiny or, better, the language. In that case, the example would become:

use Try::Tiny;
try {
    $conn->run(sub {
        ...
    });
} catch {
    ...
}

And maybe that really should be the recommended approach. It seems silly to have replicated most of Try::Tiny inside DBIx::Connector just to cut down on the number of anonymous subs and indentation levels. The latter can be got round with some semi-hinky nesting:

try { $conn->run(sub {
    ...
}) } catch {
    ...
}

Kind of ugly, though. The whole reason the catch stuff was added to DBIx::Connector was to make it all nice and integrated (as discussed here). But perhaps it was not a valid tradeoff. I'm not sure.

So I'm trying to decide how to solve these problems. The options as I see them are:

  1. Add another keyword to use before the first sub that means "the default mode". I'm not keen on the word "default", but it would look something like this:

    $conn->run(default => sub {
        ...
    }, catch => sub {
        ...
    });
    

    This would provide the needed balance, but the catch would still implicitly execute the first sub in a try context. Which isn't a great idea.

  2. Add a try keyword. So then one could do this:

    $conn->run(try => sub {
        ...
    }, catch => sub {
        ...
    });
    

    This makes it explicit that the first sub executes in a try context. I'd also have to add companion try_fixup, try_ping, and try_no_ping keywords, which are ugly. And furthermore, if there were no try keyword, would a catch be ignored? That's what would be expected, but it changes the current behavior.

  3. Deprecate the try/catch stuff in DBIx::Connector and eventually remove it. This would simplify the code and leave the responsibility for exception handling to other modules where it's more appropriate. But it would also be at the expense of formatting; it's just a little less aesthetically pleasing to have the try/catch stuff outside the method calls. But maybe it's just more appropriate.

I'm leaning toward #3, but perhaps I'll do #1 anyway, as it'd be nice to be more explicit, and one would get the benefit of the balance with catch blocks for as long as they're retained. But I'm not sure yet. I want your feedback on this. How do you want to use exception-handling with DBIx::Connector? Leave me a comment here or on the ticket.

Encoding is a Headache

I have to spend way too much of my programming time worrying about character encodings. Take my latest module, Text::Markup, for example. The purpose of the module is very simple: give it the name of a file, and it will figure out the markup it uses (HTML, Markdown, Textile, whatever) and return a string containing the HTML generated from the file. Simple, right?
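The interface is about as simple as the idea; usage looks something like this (the file name is made up for the example):

use Text::Markup;

my $parser = Text::Markup->new;
# The format is inferred from the file name's suffix; returns HTML.
my $html = $parser->parse( file => 'README.md' );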

But, hang on. Should the HTML it returns be decoded to Perl’s internal form? I’m thinking not, because the HTML itself might declare the encoding, either in an XML declaration or via something like

<meta http-equiv="Content-type" content="text/html;charset=Big5" />

And as you can see, it’s not UTF-8. So decoded it would be lying. So it should be encoded, right? Parsers like XML::LibXML::Parser are smart enough to see such declarations and decode as appropriate.

But wait a minute! Some markup languages, like Markdown, don’t have XML declarations or headers. They’re HTML fragments. So there’s no way to tell the encoding of the resulting HTML unless it’s decoded. So maybe it should be decoded. Or perhaps it should be decoded, given an XML declaration that declares the encoding as UTF-8, and then encoded as UTF-8 before being returned.
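A sketch of that last option, assuming the fragment has already been decoded to Perl’s internal form (the subroutine is hypothetical, not part of Text::Markup):

use Encode qw(encode);

# Hypothetical: prepend a UTF-8 charset declaration, then return
# UTF-8-encoded bytes.
sub encode_fragment {
    my $fragment = shift;    # decoded HTML fragment
    my $html = qq{<meta http-equiv="Content-type" }
             . qq{content="text/html;charset=UTF-8" />\n}
             . $fragment;
    return encode( 'UTF-8', $html );
}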

But, hold the phone! When reading in a markup file, should it be decoded before it’s passed to the parser? Does Text::Markdown know or care about encodings? And if it should be decoded, what encoding should one assume the source file uses? Unless it uses a BOM, how do you know what its encoding is?

Text::Markup is a dead simple idea, but virtually all of my time is going into thinking about this stuff. It drives me nuts. When will the world cease to be this way?

Oh, and if you have answers to any of these questions, please do feel free to leave a comment. I hate having to spend so much time on this, but I’d much rather do so and get things right (or close to right) than wrong.

Serious Exception-Handling Bug Fixed in DBIx::Connector 0.42

I’ve just released DBIx::Connector 0.42 to CPAN. This release fixes a serious bug with catch blocks. In short, if you threw an exception from inside a catch block, it would not be detectable from outside. For example, given this code:

eval {
    $conn->run(sub { die 'WTF' }, catch => sub { die 'OMG!' });
};
if (my $err = $@) {
    say "We got an error: $@\n";
}

With DBIx::Connector 0.41 and lower, the if block would never be called, because even though the catch block threw an exception, $@ was not set. In other words, the exception would not be propagated to its caller. This could be terribly annoying, as you can imagine. I was being a bit too clever about localizing $@, with the scope much too broad. 0.42 uses a much tighter scope to localize $@, so now it should propagate properly everywhere.
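The fix boils down to a pattern like this (a sketch, not the actual DBIx::Connector source; $code and $catch stand in for the user’s blocks):

my $err;
{
    # Localize $@ only around the eval, and copy the error out
    # before the localization ends.
    local $@;
    eval { $code->() };
    $err = $@;
}
# The catch block now runs outside the local() scope, so if it
# dies, $@ propagates to the caller as expected.
$catch->($err) if $err && $catch;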

So if you’re using DBIx::Connector catch blocks, please upgrade ASAP. Sorry for the hassle.

Handling Multiple Exceptions

I ran into an issue with DBIx::Connector tonight: SQLite started throwing an exception from within a call to rollback(): “DBD::SQLite::db rollback failed: cannot rollback transaction – SQL statements in progress”. This is rather annoying, as it ate the underlying exception that led to the rollback.

So I've added a test to DBIx::Connector that looks like this:

my $dmock = Test::MockModule->new($conn->driver);
$dmock->mock(rollback => sub { die 'Rollback WTF' });

eval { $conn->txn(sub {
    my $sth = shift->prepare("select * from t");
    die 'Transaction WTF';
}) };

ok my $err = $@, 'We should have died';
like $err, qr/Transaction WTF/, 'Should have the transaction error';

It fails as expected: the error is “Rollback WTF”. So far so good. Now the question is, how should I go about fixing it? Ideally I'd be able to access both exceptions in whatever exception handling I do. How to go about that?

I see three options. The first is that taken by Bricolage and DBIx::Class: create a new exception that combines both the transaction exception and the rollback exception into one. DBIx::Class does it like this:

$self->throw_exception(
  "Transaction aborted: ${exception}. "
  . "Rollback failed: ${rollback_exception}"
);

That’s okay as far as it goes. But what if $exception is an Exception::Class::DBI object, or some other exception object? It would get stringified and the exception handler would lose the advantages of the object. But maybe that doesn’t matter so much, since the rollback exception is kind of important to address first?

The second option is to throw a new exception object with the original exceptions as attributes. Something like (pseudo-code):

DBIx::Connector::RollbackException->new(
    txn_exception      => $exception,
    rollback_exception => $rollback_exception,
);

This has the advantage of keeping the original exception as an object, although the exception handler would have to expect this exception and go digging for it. So far in DBIx::Connector, I've left DBI exception construction up to the DBI and to the consumer, so I'm hesitant to add a one-off special-case exception object like this.

The third option is to use a special variable, @@, and put both exceptions into it. Something like:

@@ = ($exception, $rollback_exception);
die $rollback_exception;

This approach doesn’t require a dependency like the previous approach, but the user would still have to know to dig into @@ if they caught the rollback exception. But then I might as well have thrown a custom exception object that’s easier to interrogate than an exception string. Oh, and is it appropriate to use @@? I seem to recall seeing some discussion of this variable on the perl5-porters mailing list, but it’s not documented or supported. Or something. Right?
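For what it’s worth, digging into it on the consuming end might look something like this (hypothetical, like the variable itself):

eval { $conn->txn(sub { ... }) };
if (my $err = $@) {
    # The rollback exception is in $err; the original transaction
    # exception would be the first element of @@.
    my ($txn_err, $rollback_err) = @@;
    warn "Transaction aborted: $txn_err" if $txn_err;
    die $err;
}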

What would you do?

Fuck Typing LWP

I'm working on a project that fetches various files from the Internet via LWP. I wanted to make sure that I was a polite user, such that my app would pay attention to Last-Modified/If-Modified-Since and ETag/If-None-Match headers. And in most contexts I also want to respect the robots.txt file on the hosts to which I'm sending requests. So I was very interested to read chromatic’s hack for this very issue. I happily implemented two classes for my app: MyApp::UA, which inherits from LWP::UserAgent::WithCache, and MyApp::UA::Robot, which inherits from MyApp::UA but changes LWP::UserAgent::WithCache to inherit from LWP::RobotUA:

@LWP::UserAgent::WithCache::ISA = ('LWP::RobotUA');

So far so good, right? Well, no. What I didn’t think about, stupidly, is that by changing LWP::UserAgent::WithCache’s base class, I was doing so globally. So now both MyApp::UA and MyApp::UA::Robot were getting the LWP::RobotUA behavior. Urk.

So my workaround is to use a little fuck typing to ensure that MyApp::UA::Robot has the robot behavior but MyApp::UA does not. Here’s what it looks like (BEWARE: black magic ahead!):

package MyApp::UA::Robot;

use 5.12.0;
use utf8;
use parent 'MyApp::UA';
use LWP::RobotUA;

do {
    # Import the RobotUA interface. This way we get its behavior without
    # having to change LWP::UserAgent::WithCache's inheritance.
    no strict 'refs';
    while ( my ($k, $v) = each %{'LWP::RobotUA::'} ) {
        *{$k} = *{$v}{CODE} if *{$v}{CODE} && $k ne 'new';
    }
};

sub new {
    my ($class, $app) = (shift, shift);
    # Force RobotUA configuration.
    local @LWP::UserAgent::WithCache::ISA = ('LWP::RobotUA');
    return $class->SUPER::new(
        $app,
        delay => 1, # be very nice -- max one hit per minute.
    );
}

The do block is where I do the fuck typing. It iterates over all the symbols in LWP::RobotUA, inserting a reference to each subroutine into the current package, except for new, which I implement myself. This is so that I can keep my inheritance from MyApp::UA intact. But in order for it to properly configure the LWP::RobotUA interface, new must temporarily fool Perl into thinking that LWP::UserAgent::WithCache inherits from LWP::RobotUA.

Pure evil, right? Wait, it gets worse. I've also overridden LWP::RobotUA’s host_wait method, because if it’s the second request to a given host, I don’t want it to sleep (the first request is for the robots.txt, and I see no reason to sleep after that). So I had to modify the do block to skip both new and host_wait:

    while ( my ($k, $v) = each %{'LWP::RobotUA::'} ) {
        *{$k} = *{$v}{CODE} if *{$v}{CODE} && $k !~ /^(?:new|host_wait)$/;
    }

If I “override” any other LWP::RobotUA methods, I'll need to remember to add them to that regex. Of course, since I'm not actually inheriting from LWP::RobotUA, in order to dispatch to its host_wait method, I can’t use SUPER, but must dispatch directly:

sub host_wait {
    my ($self, $netloc) = @_;
    # First visit is for robots.txt, so let it be free.
    return if !$netloc || $self->no_visits($netloc) < 2;
    $self->LWP::RobotUA::host_wait($netloc);
}

Ugly, right? Yes, I am an evil bastard. “Fuck typing” is right, yo! At least it’s all encapsulated.

This just reinforces chromatic’s message in my mind. I'd sure love to see LWP reworked to use roles!

Defend Against Programmer Mistakes?

I get email:

Hey David,

I ran in to an issue earlier today in production that, while it is an error in my code, DBIx::Connector could easily handle the issue better. Here's the use case:

package Con;
use Moose;
sub txn {
    my ($self, $code) = @_;
    my @ret;
    warn "BEGIN EVAL\n";
    eval{ @ret = $code->() };
    warn "END EVAL\n";
    die "DIE: $@" if $@;
    return @ret;
}
package main;
my $c = Con->new();
foreach (1..2) {
    $c->txn(sub{ next; });
}

The result of this is:

BEGIN EVAL
Exiting subroutine via next at test.pl line 16.
Exiting eval via next at test.pl line 16.
Exiting subroutine via next at test.pl line 16.
BEGIN EVAL
Exiting subroutine via next at test.pl line 16.
Exiting eval via next at test.pl line 16.
Exiting subroutine via next at test.pl line 16.

This means that any code after the eval block is not executed. And, in the case of DBIx::Connector, means the transaction is not committed or rolled back, and the next call to txn() is mysteriously combined with the previous txn() call. A quick fix for this is to just add a pair of curly braces inside the eval (the bare block counts as a loop that next can exit, so execution stays within the eval):

eval{ { @ret = $code->() } };

Then the results are more what we'd expect:

BEGIN EVAL
Exiting subroutine via next at test.pl line 16.
END EVAL
BEGIN EVAL
Exiting subroutine via next at test.pl line 16.
END EVAL

I've fixed my code to use return; instead of next;, but I think this would be a useful fix for DBIx::Connector so that it doesn't act in such an unexpected fashion when the developer accidentally calls next.

The fix here is pretty simple, but I'm not sure I want to get into the business of defending against programmer mistakes like this in DBIx::Connector or any module.

What do you think?

Sane URL Links in Pod

A couple months ago, RJBS and I collaborated on adding a new feature to Pod: sane URL links. For, well, ever, to link to URLs or any other scheme: links in Pod, you had to do something like this:

For more information, consult the pgTAP documentation:
L<http://pgtap.projects.postgresql.org/documentation.html>

The reasons why you couldn't include text in the link to serve as the link text have never been really well spelled out. Sean Burke, the most recent author of the Pod spec, had only said that the support wasn't there "for various reasons."

Meanwhile, I accidentally discovered that Pod::Simple has in fact supported such formats for a long time. At some point Sean added it, but didn't update the spec. Maybe he thought it was fragile. I have no idea. But since the support was already there, and most of the other Pod tools already support it or want to, it was a simple change to make to the spec, and it was released in Perl 5.11.3 and Pod::Simple 3.11. It's now officially a part of the spec. The above Pod can now be written as:

For more information, consult the L<pgTAP
documentation|http://pgtap.projects.postgresql.org/documentation.html>.

So much better! And to show it off, I've just updated all the links in SVN::Notify and released a new version. Check it out on CPAN Search. See how links such as those to "HookStart.exe" and "Windows Subversion + Apache + TortoiseSVN + SVN::Notify HOWTO" are now proper links? They no longer use the URL for the link text. Contrast with the previous version.

And as of yesterday, the last piece to allow this went into place. Andy gave me maintenance of Test::Pod, and I immediately released a new version to allow the new syntax. So update your t/pod.t file to require Test::Pod 1.41, update your links, and celebrate the arrival of sane links in Pod documentation.

Make the Pragmas Stop!

I've been following the development of a few things in the Perl community lately, and it’s leaving me very frustrated. For years now, I've written modules that start with the same incantation:

package My::Module;

use strict;
our $VERSION = '0.01';

Pretty simple: declare the module name and version, and turn on strictures to make sure I'm not doing anything stupid. More recently I've added use warnings; as a best practice. And even more recently, I've started adding use utf8;, too, because I like to write my code in UTF-8. And I like to turn on all of the Perl 5.10 features. It’s mildly annoying to have the same incantation at the start of every module, but I could deal with it:

package My::Module;

use strict;
use warnings;
use feature ':5.10';
use utf8;

our $VERSION = '0.01';

Until now, that is. Last year, chromatic started something with his Modern::Perl module. It was a decent idea: help newbies get started with Perl by giving them just one declaration to put at the tops of their modules:

package My::Module;

use Modern::Perl;
our $VERSION = '0.01';

Alas, it wasn’t really designed for me, but for more casual users of Perl, so that they don’t have to think about the pragmas they need to use. The fact that it doesn’t include the utf8 pragma also made it a non-starter for me. Or did it? Someone recently suggested that the utf8 pragma has problems (I can’t find the Perl Monks thread at the moment). Others report that the encoding pragma has issues, too. So what’s the right thing to do with regard to assuming everything is UTF8 in my program and its inputs (unless I say otherwise)? I'm not at all sure.

Not only that, but Modern::Perl has led to an explosion of other pragma-like modules on CPAN that promise best pragma practices. There’s common::sense, which loads utf8 but only some of the features of strict, warnings, and feature. uni::perl looks almost exactly the same. There’s also Damian Conway’s Toolkit, which allows you to write your own pragma-like loader module. There’s even Acme::Very::Modern::Perl, which is meant to be a joke, but is it really?

If I want to simplify the incantation at the top of every file, what do I use?

And now it’s getting worse. In addition to feature, Perl 5.11 introduces the legacy pragma, which allows one to get back behaviors from older Perls. For example, to get back the old Unicode semantics, you'd use legacy 'unicode8bit';. I mean, WTF?

I've had it. Please make the pragma explosion stop! Make it so that the best practices known at the time of the release of any given version of Perl can automatically be imported if I just write:

package My::Module '0.01';
use 5.12;

That’s it. Nothing more. Whatever has been deemed the best practice at the time 5.12 is released will simply be used. If the best practices change in 5.14, I can switch to use 5.14; and get them, or just leave it at use 5.12 and keep what was the best practices in 5.12 (yay future-proofing!).

What should the best practices be? My list would include:

  • strict
  • warnings
  • features — all of them
  • UTF-8 — all input and output to the scope, as well as the source code

Maybe you disagree with that list. Maybe I’d disagree with what Perl 5 Porters settles on. But then you and I can read what’s included and just add or remove pragmas as necessary. But surely there’s a core list of likely candidates that should be included the vast majority of the time, including for all novices.

In personal communication, chromatic tells me, with regard to Modern::Perl, “Experienced Perl programmers know the right incantations to get the behavior they want. Novices don’t, and I think we can provide them much better defaults without several lines of incantations.” I'm fine with the second assertion, but disagree with the first. I've been hacking Perl for almost 15 years, and I no longer have any fucking idea what incantation is best to use in my modules. Do help the novices, and make the power tools available to experienced hackers, but please make life easier for the experienced hackers, too.

I think that declaring the semantics of a particular version of Perl is where the Perl 5 Porters are headed. I just hope that includes handling all of the likely pragmas too, so that I don’t have to.

Test Everything with TAP Source Handlers

I've just arrived in Japan with my family. We're going to be spending several days in Tokyo, during which time I'll be at the JPUG 10th Anniversary PostgreSQL Conference for a couple of days (giving the usual talk), but mainly I'll be on vacation. We'll be visiting Kyoto, too. We're really excited about this trip; it'll be a great experience for Anna. I'll be back in the saddle in December, so for those of you anxiously awaiting the next installment of my Catalyst tutorial, I'm afraid you'll have to wait a bit longer.

In the meantime, I wanted to write about a little something that's been cooking for a while. Over the last several months, Steve Purkis has been working on a new feature for TAP::Parser: source handlers. The idea is to make it easier for developers to add support for TAP emitters other than Perl. The existing implementation did a decent job of handling Perl test scripts, of course, and executable files (useful for compiled tests in C using libtap, for example), but anything else was difficult.

As the author of pgTAP, I was of course greatly interested in this work, because I had to bend over backwards to get pg_prove to work nicely. It's even uglier to get a Module::Build-based distribution to run pgTAP and Perl tests all at once during ./Build test: you had to subclass Module::Build to do it.

Steve wanted to solve this problem, and he did. Then he was kind enough to listen to my bitching and moaning and rewrite his fix so that it was simpler for third parties (read: me) to add new source handlers. What's a source handler, you ask? Check out the latest dev release of Test::Harness and you'll see it: TAP::Parser::SourceHandler. As soon as Steve committed it, I jumped in and implemented a new handler for pgTAP. The cool thing is that it took me only three hours to do, including tests. And here's how you use it in a Build.PL, so that you can have pgTAP tests named *.pg run at the same time as your *.t Perl tests:

Module::Build->new(
    module_name        => 'MyApp',
    test_file_exts     => [qw(.t .pg)],
    use_tap_harness    => 1,
    tap_harness_args   => {
        sources => {
            Perl  => undef,
            pgTAP => {
                dbname   => 'try',
                username => 'postgres',
                suffix   => '.pg',
            },
        }
    },
    build_requires     => {
        'Module::Build'                      => '0.30',
        'TAP::Parser::SourceHandler::pgTAP' => '3.19',
    },
)->create_build_script;

To summarize, you just have to:

  • Tell Module::Build the extensions of your test scripts (that's qw(.t .pg) here)
  • Specify the Perl source with its defaults (that's what the undef does)
  • Specify the pgTAP options (database name, username, suffix, and lots of other potential settings)

And that's it. You're done! Run your tests with the usual incantation:

perl Build.PL
./Build test

You can use pgTAP and its options with prove, too, via the --source and --pgtap-option options:

prove --source pgTAP --pgtap-option dbname=try \
                     --pgtap-option username=postgres \
                     --pgtap-option suffix=.pg \
                     t/sometest.pg

It's great that it's now so much easier to support pgTAP tests, but what if you want to have Ruby tests? Or PHP? Well, it's a simple process to write your own source handler. Here's how:

  • Subclass TAP::Parser::SourceHandler. The final part of the package name is the name of the source. Thus if you wrote TAP::Parser::SourceHandler::Ruby, the name of your source would be "Ruby".

  • Load the necessary modules and register your source handler. For a Ruby source handler, it might look like this:

    package TAP::Parser::SourceHandler::Ruby;
    use strict;
    use warnings;
    
    use TAP::Parser::IteratorFactory   ();
    use TAP::Parser::Iterator::Process ();
    TAP::Parser::IteratorFactory->register_handler(__PACKAGE__);
    
  • Implement the can_handle() method. The task of this method is to return a score between 0 and 1 for how likely it is that your source handler can handle a given source. A bunch of information is passed in a hash to the method, so you can check it all out. For example, if you wanted to run Ruby tests ending in .rb, you might write something like this:

    sub can_handle {
      my ( $class, $source ) = @_;
      my $meta = $source->meta;
    
      # If it's not a file (test script), we're not interested.
      return 0 unless $meta->{is_file};
    
      # Get the file suffix, if any.
      my $suf = $meta->{file}{lc_ext};
    
      # If the config specifies a suffix, it's required.
      if ( my $config = $source->config_for('Ruby') ) {
          if ( defined $config->{suffix} ) {
              # Return 1 for a perfect score.
              return $suf eq $config->{suffix} ? 1 : 0;
          }
      }
    
      # Otherwise, return a score for our supported suffix.
      return $suf eq '.rb' ? 0.8 : 0;
    }
    

    The last line is the most important: it returns 0.8 if the suffix is .rb, saying that it's likely that this handler can handle the test. But the middle bit is interesting, too. The $source->config_for('Ruby') call is seeing if the user specified a suffix, either via the command-line or in the options. So in a Build.PL, that might be:

      tap_harness_args => {
          sources => {
              Perl => undef,
              Ruby => { suffix => '.rub' },
          }
      },
    

    Meaning that the user wanted to run tests ending in .rub as Ruby tests. It can also be done on the command-line with prove:

    prove --source Ruby --ruby-option suffix=.rub
    

    Cool, eh? We have a reasonable default for Ruby tests, .rb, but the user can override however she likes.

  • And finally, implement the make_iterator() method. The job of this method is simply to create a TAP::Parser::Iterator object to actually run the test. It might look something like this:

    sub make_iterator {
      my ( $class, $source ) = @_;
      my $config = $source->config_for('Ruby');
    
      my $fn = ref $source->raw ? ${ $source->raw } : $source->raw;
      $class->_croak(
          'No such file or directory: ' . ( defined $fn ? $fn : '' )
      ) unless $fn && -e $fn;
    
      return TAP::Parser::Iterator::Process->new({
          command => [$config->{ruby} || 'ruby', $fn ],
          merge   => $source->merge
      });
    }
    

    Simple, right? Just make sure we have a valid file to execute, then instantiate and return a TAP::Parser::Iterator::Process object to actually run the test.

That's it. Just two methods and you're ready to go. I've even added support for a suffix option and a ruby option (so that you can point to the ruby executable in case it's not in your path). Using it is easy. I wrote a quick TAP-emitting Ruby script like so:

puts 'ok 1 - This is a test'
puts 'ok 2 - This is another test'
puts 'not ok 3 - This is a failed test'

And to run this test (assuming that TAP::Parser::SourceHandler::Ruby has been installed somewhere where Perl can find it), it's just:

% prove --source Ruby ~/try.rb --verbose
/Users/david/try.rb .. 
ok 1 - This is a test
ok 2 - This is another test
not ok 3 - This is a failed test
Failed 1/3 subtests 

Test Summary Report
-------------------
/Users/david/try.rb (Wstat: 0 Tests: 3 Failed: 1)
  Failed test:  3
  Parse errors: No plan found in TAP output
Files=1, Tests=3,  0 wallclock secs ( 0.02 usr +  0.01 sys =  0.03 CPU)
Result: FAIL

It's so easy to create new source handlers now, especially if all you have to do is support a new dynamic language. I've put the simple Ruby example over here; feel free to take it and run with it!

Testing Catalyst Template::Declare Views

Now that we have our default Catalyst tests passing, let's have a look at testing the views we've created. You can follow along via the Part 6 tag in the GitHub repository. Start by looking at the default test script for our HTML view, t/view_HTML.t. It should look something like this:

use strict;
use warnings;
use Test::More tests => 3;
# use Test::XPath;

BEGIN {
    use_ok 'MyApp::View::HTML' or die;
    use_ok 'MyApp' or die;
}

ok my $view = MyApp->view('HTML'), 'Get HTML view object';

# ok my $output = $view->render(undef, 'hello', { user => 'Theory' }),
#     'Render the "hello" template';

# Test output using Test::XPath or similar.
# my $tx = Test::XPath->new( xml => $output, is_html => 1);
# $tx->ok('/html', 'Should have root html element');
# $tx->is('/html/head/title', 'Hello, Theory', 'Title should be correct');

Yeah, this looks a bit different from the view test created for Template Toolkit or Mason views. That's because Catalyst::View::TD ships with its own test script template. One of the advantages is that it shows off testing the view without having to instantiate the entire app or send mock HTTP requests. These are unit tests, after all: we want to make sure that the view templates do what we want them to do, not test an entire request process. The latter is more appropriate for integration tests, which I'll cover later.

So let's have a look at this test script. The first commented-out statement is:

# ok my $output = $view->render(undef, 'hello', { user => 'Theory' }),
#     'Render the "hello" template';

What this is showing us is that one can use the view's render() method to execute a view without a context object, thus saving the expense of initializing the application. And if you have templates that don't rely on it, I highly recommend this approach for keeping your tests fast. Even if the use of the context object is fairly minimal, you can use Test::MockObject to mock up a context object like so:

use Test::MockObject;
my $c = Test::MockObject->new;
$c->mock(uri_for => sub { $_[1] });
$c->mock(config  => sub { { name => 'MyApp' } });
$c->mock(debug   => sub { });
$c->mock(log     => sub { });

ok my $output = $view->render($c, 'hello', { user => 'Theory' }),
     'Render the "hello" template';

Then you can use the mock() method to mock more methods as your template uses them.

Alas, our app has already passed the point where that seems worthwhile. So far we have just one template, books/list, and it requires that there also be a database statement handle available. Sure we could create a database connection and prepare a statement handle. But that would start to require a fair bit more code to set up. So let's just instantiate the application object and be done with it. Change the test plan to 5:

use Test::More tests => 5;

Then change the test body after the BEGIN block to:

# Instantiate the context object and the view.
ok my $c = MyApp->new, 'Create context object';
ok my $view = $c->view('HTML'), 'Get HTML view object';

# Create a statement handle for books/list.
my $sth = $c->conn->run(sub { $_->prepare(q{
    SELECT isbn, title, rating, authors FROM books_with_authors
}) });
$sth->execute;

# Render books/list.
ok my $output = $view->render($c, 'books/list', {
    title => 'Book List',
    books => $sth,
}), 'Render the "books/list" template';

This allows us to get a full test of the view.

% prove --lib --verbose t/view_HTML.t
t/view_HTML.t .. 
1..5
ok 1 - use MyApp::View::HTML;
ok 2 - use MyApp;
ok 3 - Create context object
ok 4 - Get HTML view object
Explicit blessing to '' (assuming package main) at /usr/local/lib/perl5/site_perl/5.10.1/Catalyst.pm line 1281.
Explicit blessing to '' (assuming package main) at /usr/local/lib/perl5/site_perl/5.10.1/Catalyst.pm line 1281.
Explicit blessing to '' (assuming package main) at /usr/local/lib/perl5/site_perl/5.10.1/Catalyst.pm line 1281.
Explicit blessing to '' (assuming package main) at /usr/local/lib/perl5/site_perl/5.10.1/Catalyst.pm line 1281.
ok 5 - Render the "books/list" template
ok
All tests successful.
Files=1, Tests=5,  1 wallclock secs ( 0.02 usr  0.00 sys +  0.69 cusr  0.06 csys =  0.77 CPU)
Result: PASS

Hrm. Those warnings are rather annoying. Looking at Catalyst.pm I see that they come from the uri_for() method. I expect that they somehow result from a lack of state in the context object. That's not really important for our unit tests, so let's just mock that one method to do something reasonable. Add this code after instantiating the context object but before rendering the view:

use Test::MockObject::Extends;
my $mocker = Test::MockObject::Extends->new($c);
$mocker->mock( uri_for => sub { $_[1]} );

And now we get:

% prove --lib --verbose t/view_HTML.t
t/view_HTML.t .. 
1..5
ok 1 - use MyApp::View::HTML;
ok 2 - use MyApp;
ok 3 - Create context object
ok 4 - Get HTML view object
ok 5 - Render the "books/list" template
ok
All tests successful.
Files=1, Tests=5,  1 wallclock secs ( 0.02 usr  0.01 sys +  0.77 cusr  0.07 csys =  0.87 CPU)
Result: PASS

Ah, much better! And thanks to our mock, we also have a much better idea of what will be returned from uri_for(), which will be important for later tests.

Now that we have things properly mocked up and the objects created such that we can actually get the template to render, it's time to test the output from the template. For HTML and XML format, I like the Test::XPath module. In fact, it's for this very use that I wrote Test::XPath. It's great because it allows me to effectively test the correctness of the template output. Here's the basic outline:

# Test output using Test::XPath.
my $tx = Test::XPath->new( xml => $output, is_html => 1);
test_basics($tx, 'Book List');

# Call this function for every request to make sure that they all
# have the same basic structure.
sub test_basics {
    my ($tx, $title) = @_;

    # Some basic sanity-checking.
    $tx->is( 'count(/html)',      1, 'Should have 1 html element' );
    $tx->is( 'count(/html/head)', 1, 'Should have 1 head element' );
    $tx->is( 'count(/html/body)', 1, 'Should have 1 body element' );

    # Check the head element.
    $tx->is(
        '/html/head/title',
        $title,
        'Title should be correct'
    );
    $tx->is(
        '/html/head/link[@type="text/css"][@rel="stylesheet"]/@href',
        '/static/css/main.css',
        'Should load the CSS',
    );
}

I've set up the test_basics() function to test the things that should be mostly the same for every request. This will mainly cover the output of the wrapper, and includes things like making sure that there is just one <html> tag, one <head> tag, and one <body> tag; and that the title and CSS-related elements are output properly. Running this (with the test plan set to no_plan as I develop), I get:

% prove --lib t/view_HTML.t
t/view_HTML.t .. 2/? 
#   Failed test 'Should load the CSS'
#   at t/view_HTML.t line 52.
#          got: ''
#     expected: '/static/css/main.css'
# Looks like you failed 1 test of 10.
t/view_HTML.t .. Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/10 subtests 

Test Summary Report
-------------------
t/view_HTML.t (Wstat: 256 Tests: 10 Failed: 1)
  Failed test:  10
  Non-zero exit status: 1
Files=1, Tests=10,  1 wallclock secs ( 0.02 usr  0.01 sys +  0.79 cusr  0.08 csys =  0.90 CPU)
Result: FAIL

Hrm. Let's stick a diag $output in there and see what we get. Now the output includes this bit:

# <html>
#  <head>
#   <title>Book List</title>
#   <link rel="stylesheet" href="/static/css/main.css" />
#  </head>

Ah! The <link> element for the stylesheet is missing the type attribute. So let's add it. Edit lib/MyApp/Templates/HTML.pm and change the proper bit of the wrapper template to:

link {
    rel is 'stylesheet';
    type is 'text/css';
    href is $c->uri_for('/static/css/main.css' );
};

Note the addition of the type attribute. Now when we run the tests (removing the diag), we get:

% prove --lib t/view_HTML.t
t/view_HTML.t .. ok    
All tests successful.
Files=1, Tests=10,  1 wallclock secs ( 0.02 usr  0.00 sys +  0.78 cusr  0.07 csys =  0.87 CPU)
Result: PASS

Ah, much better! A lot more testing should go in there to make sure that the wrapper is doing things right. I've committed such testing, so check it out.

Now we need to test the output specific to the books/list template. Below the call to test_basics(), add this code:

$tx->ok('/html/body/div[@id="bodyblock"]/div[@id="content"]/table', sub {
    $_->is('count(./tr)', 6, 'Should have six rows' );
    $_->ok('./tr[1]', sub {
        $_->is('count(./th)', 3, 'Should have three table headers');
        $_->is('./th[1]', 'Title', '... first is "Title"');
        $_->is('./th[2]', 'Rating', '... second is "Rating"');
        $_->is('./th[3]', 'Authors', '... third is "Authors"');
    }, 'Should have first table row')
}, 'Should have a table');

Notice the nested block there? Test::XPath supports passing blocks to its ok() method, so that you can naturally scope your tests to blocks of XML and HTML. Neat, huh? If you don't like the use of $_, the test object is also passed as the sole argument to such blocks.
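So the table test above could just as well be written like this:

$tx->ok('/html/body/div[@id="bodyblock"]/div[@id="content"]/table', sub {
    my $table = shift;    # The same Test::XPath object as $_.
    $table->is('count(./tr)', 6, 'Should have six rows');
}, 'Should have a table');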

Anyway, these tests make sure that the table is where it should be, has the proper number of rows, and that the first row has three headers with their proper values. The test outputs:

% prove --lib t/view_HTML.t
t/view_HTML.t .. 1/? 
#   Failed test '... third is "Authors"'
#   at t/view_HTML.t line 42.
#          got: 'Author'
#     expected: 'Authors'
# Looks like you failed 1 test of 28.
t/view_HTML.t .. Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/28 subtests 

Test Summary Report
-------------------
t/view_HTML.t (Wstat: 256 Tests: 28 Failed: 1)
  Failed test:  28
  Non-zero exit status: 1
Files=1, Tests=28,  1 wallclock secs ( 0.03 usr  0.01 sys +  0.79 cusr  0.08 csys =  0.91 CPU)
Result: FAIL

Whoops! Looks like I forgot to change the header when I changed the template to output a list of authors last week. So edit lib/MyApp/Templates/HTML/Books.pm and change the template to output "Authors" instead of "Author":

row {
    th { 'Title'   };
    th { 'Rating'  };
    th { 'Authors' };
};

And now all tests pass again:

% prove --lib t/view_HTML.t
t/view_HTML.t .. ok    
All tests successful.
Files=1, Tests=28,  1 wallclock secs ( 0.02 usr  0.01 sys +  0.78 cusr  0.09 csys =  0.90 CPU)
Result: PASS

Great. So let's finish testing the rest of the output. Ah, but wait! We have no ORDER BY clause on the query, so the order in which the books will be output is undefined. So let's add an ORDER BY clause. Change the creation of the statement handle in the test file to:

my $sth = $c->conn->run(sub { $_->prepare(q{
    SELECT isbn, title, rating, authors
      FROM books_with_authors
     ORDER BY title
}) });

And now you can start to see why I use the q{} operator for SQL queries: multiline queries stay readable, and curly braces rarely appear in SQL, so there's nothing to escape. You should also note that the inputs for the view test are now different from those for the controller, which still has no ORDER BY clause. It's likely that we'll want to go back and change that later, but I bring it up here to highlight the difference from integration tests -- and to emphasize that we'll need to write those integration tests at some point!

But back to the view unit tests. We can now test the contents of the table by adding code after the test for ./tr[1]. Here's what the test for the next row looks like:

$_->ok('./tr[2]', sub {
    $_->is('count(./td)', 3, 'Should have three cells');
    $_->is(
        './td[1]',
        'CCSP SNRS Exam Certification Guide',
        '... first is "CCSP SNRS Exam Certification Guide"'
    );
    $_->is('./td[2]', 5, '... second is "5"');
    $_->is(
        './td[3]',
        'Bastien, Nasseh, Degu',
        '... third is "Bastien, Nasseh, Degu"',
    );
}, 'Should have second table row');

The other rows can be similarly tested; have a look at the commit to see all the new tests.

This reminds me, however, that we never created an order for the list of authors. So it's possible that this test could fail, as the order of the author last names is undefined. We should go back and fix that, probably by listing the authors as they are actually listed on the cover of the book. But in the meantime, our test of this view is done.

Next up, I think I'll hit controller tests. So come on back!

Testing the Tutorial App

Yet another entry in my ongoing attempt to rewrite the Catalyst tutorial in my own coding style.

So far, I've been following the original tutorial pretty closely. But now I want to skip ahead a bit to chapter 8: testing. I skip because, really, we should be writing tests from the very beginning. They shouldn’t be an afterthought stuck in the penultimate chapter of a tutorial. So let’s write some tests. You can follow along in the Part 5 tag in the GitHub repository.

Oops, A Missing Dependency

Oh, wait! I forgot to tell the build system that we now depend on Catalyst::View::TD and DBIx::Connector. So add these two lines to Makefile.PL:

requires 'Catalyst::View::TD' => '0.11';
requires 'DBIx::Connector' => '0.30';

Okay, now we can write some tests.

STFU

Well, no, actually, let’s start by running the tests we have:

perl Makefile.PL
make test

You should see some output after this — lots of stuff, actually — ending something like this:

[debug] Loaded Path actions:
.-------------------------------------+--------------------------------------.
| Path                                | Private                              |
+-------------------------------------+--------------------------------------+
| /                                   | /index                               |
| /                                   | /default                             |
| /books                              | /books/index                         |
| /books/list                         | /books/list                          |
'-------------------------------------+--------------------------------------'

[info] MyApp powered by Catalyst 5.80013
t/view_HTML.t ......... ok   
All tests successful.
Files=5, Tests=8,  3 wallclock secs ( 0.04 usr  0.02 sys +  2.19 cusr  0.25 csys =  2.50 CPU)
Result: PASS

I don’t know about you, but having all that debugging crap just drives me nuts while I'm running tests. It’s helpful while doing development, but mainly just gets in the way of the tests. So let’s get rid of them. Open up lib/MyApp.pm and change the use Catalyst statement to:

use Catalyst (qw(
    ConfigLoader
    Static::Simple
    StackTrace
), $ENV{HARNESS_ACTIVE} ? () : '-Debug');

Essentially, we're just turning on the debugging output only if the test harness is not active. Now when we run the tests, we get:

t/01app.t ............. ok   
t/02pod.t ............. skipped: set TEST_POD to enable this test
t/03podcoverage.t ..... skipped: set TEST_POD to enable this test
t/controller_Books.t .. ok   
t/view_HTML.t ......... ok   
All tests successful.
Files=5, Tests=8,  3 wallclock secs ( 0.04 usr  0.02 sys +  2.15 cusr  0.23 csys =  2.44 CPU)
Result: PASS

Much better. Now I can actually see other stuff, such as the fact that I'm skipping POD tests. Personally, I like to make sure that POD tests run all the time, as I'm likely to forget to set the environment variable. So let’s edit t/02pod.t and t/03podcoverage.t and delete this line from each:

plan skip_all => 'set TEST_POD to enable this test' unless $ENV{TEST_POD};

So what does that get us?

t/01app.t ............. ok   
t/02pod.t ............. ok     
t/03podcoverage.t ..... 1/6 
#   Failed test 'Pod coverage on MyApp::Controller::Books'
#   at /usr/local/lib/perl5/site_perl/5.10.1/Test/Pod/Coverage.pm line 126.
# Coverage for MyApp::Controller::Books is 50.0%, with 1 naked subroutine:
#   list

#   Failed test 'Pod coverage on MyApp::Controller::Root'
#   at /usr/local/lib/perl5/site_perl/5.10.1/Test/Pod/Coverage.pm line 126.
# Coverage for MyApp::Controller::Root is 66.7%, with 1 naked subroutine:
#   default
# Looks like you failed 2 tests of 6.
t/03podcoverage.t ..... Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/6 subtests 
t/controller_Books.t .. ok   
t/view_HTML.t ......... ok   

Test Summary Report
-------------------
t/03podcoverage.t   (Wstat: 512 Tests: 6 Failed: 2)
  Failed tests:  2-3
  Non-zero exit status: 2
Files=5, Tests=25,  3 wallclock secs ( 0.05 usr  0.02 sys +  2.82 cusr  0.29 csys =  3.18 CPU)
Result: FAIL
Failed 1/5 test programs. 2/25 subtests failed.

Well that figures, doesn’t it? We added the list action to MyApp::Controller::Books but never documented it. And for some reason, Catalyst creates the default action in MyApp::Controller::Root with no documentation. Such a shame. So let’s document those methods. Add this to lib/MyApp/Controller/Root.pm:

=head2 default

The default action. Just returns a 404/NOT FOUND error. Might want to update
later with a template to format the error like the rest of our site.

=cut

While there, I notice that the index action has a doc header, but nothing to actually describe what it does. Let’s fix that, too:

The default Catalyst action, which just displays the welcome message. This is
the "Yay it worked!" page. Consider changing to a real home page for our app.

Great. Now open lib/MyApp/Controller/Books.pm and document the list action:

=head2 list

Looks up all of the books in the system and executes a template to display
them in a nice table. The data includes the title, rating, and authors of each
book.

=cut

Oh hey, look at that. There’s an index method that doesn’t do anything. And it has a POD header and no docs, too. So let’s document it:

The default method for the books controller. Currently just says that it
matches the request; we'll likely want to change it to something more
reasonable down the line.

Okay, so how do the tests look now?

t/01app.t ............. ok   
t/02pod.t ............. ok     
t/03podcoverage.t ..... ok   
t/controller_Books.t .. ok   
t/view_HTML.t ......... ok   
All tests successful.
Files=5, Tests=25,  3 wallclock secs ( 0.05 usr  0.02 sys +  2.82 cusr  0.31 csys =  3.20 CPU)
Result: PASS

Excellent! Now, the truth is that we didn’t document our templates, either. Test::Pod::Coverage doesn’t cotton on to that fact because templates aren’t installed like normal subroutines in the template classes. So it’s up to us to document them ourselves. (Note to self: Consider adding a module to test that all Template::Declare classes have docs for all of their templates.) I'll wait here while you do that.

All done? Great! I had actually planned to start testing the view next, but I think this is enough for today. Stay tuned for more testing goodness.