Dist::Zilla::LocaleTextDomain for Translators

Here’s a follow-up to my post about localizing Perl modules with Locale::TextDomain. Dist::Zilla::LocaleTextDomain was great for developers, but less so for translators. A Sqitch translator asked how to test the translation file he was working on, and my only reply was to compile the whole module, install it, and test it. Ugh.

Today, I released Dist::Zilla::LocaleTextDomain v0.85 with a new command, msg-compile. This command allows translators to easily compile just the file they’re working on and nothing else. For pure Perl modules in particular, this makes translations easy to test. By default, the compiled catalog goes into ./LocaleData, where convincing the module to find it is simple. For example, I updated the test sqitch app to take advantage of this. Now, to test, say, the French translation file, all the translator has to do is:

> dzil msg-compile po/fr.po
[LocaleTextDomain] po/fr.po: 155 translated messages, 24 fuzzy translations, 16 untranslated messages.

> LANGUAGE=fr ./t/sqitch foo
"foo" n'est pas une commande valide

I hope this simplifies things for translators. See the notes for translators for a few more words on the subject.

Localize Your Perl modules with Locale::TextDomain and Dist::Zilla

I've just released Dist::Zilla::LocaleTextDomain v0.80 to the CPAN. This module adds support for managing Locale::TextDomain-based localization and internationalization in your CPAN libraries. I wanted to make it as simple as possible for CPAN developers to do localization and to support translators in their projects, and Dist::Zilla seemed like the perfect place to do it, since it has hooks to generate the necessary binary files for distribution.

Starting out with Locale::TextDomain was decidedly non-intuitive for me, as a Perl hacker, likely because of its gettext underpinnings. Now that I've got a grip on it and created the Dist::Zilla support, I think it's pretty straight-forward. To demonstrate, I wrote the following brief tutorial, which constitutes the main documentation for the Dist::Zilla::LocaleTextDomain distribution. I hope it makes it easier for you to get started localizing your Perl libraries.

Localize Your Perl modules with Locale::TextDomain and Dist::Zilla

Locale::TextDomain provides a nice interface for localizing your Perl applications. The tools for managing translations, however, are a bit arcane. Fortunately, you can just use this plugin and get all the tools you need to scan your Perl libraries for localizable strings, create a language template, and initialize translation files and keep them up-to-date. All this assumes that your system has the gettext utilities installed.

The Details

I put off learning how to use Locale::TextDomain for quite a while because, while the gettext tools are great for translators, the tools for the developer were a little more opaque, especially for Perlers used to Locale::Maketext. But I put in the effort while hacking Sqitch. As I had hoped, using it in my code was easy. Using it for my distribution was harder, so I decided to write Dist::Zilla::LocaleTextDomain to make life simpler for developers who manage their distributions with Dist::Zilla.

What follows is a quick tutorial on using Locale::TextDomain in your code and managing it with Dist::Zilla::LocaleTextDomain.

This is my domain

The first thing to do is start using Locale::TextDomain in your code. Load it into each module with the name of your distribution, as set by the name attribute in your dist.ini file. For example, if your dist.ini looks something like this:

name    = My-GreatApp
author  = Homer Simpson <homer@example.com>
license = Perl_5

Then, in your Perl libraries, load Locale::TextDomain like this:

use Locale::TextDomain qw(My-GreatApp);

Locale::TextDomain uses this value to find localization catalogs, so naturally Dist::Zilla::LocaleTextDomain will use it to put those catalogs in the right place.

Okay, so it's loaded, how do you use it? The documentation of the Locale::TextDomain exported functions is quite comprehensive, and I think you'll find it pretty simple once you get used to it. For example, simple strings are denoted with __:

say __ 'Hello';

If you need to specify variables, use __x:

say __x(
    'You selected the color {color}',
    color => $color,
);

Need to deal with plurals? Use __n:

say __n(
    'One file has been deleted',
    'All files have been deleted',
    $num_files,
);

And then you can mix variables with plurals with __nx:

say __nx(
    'One file has been deleted.',
    '{count} files have been deleted.',
    $num_files,
    count => $num_files,
);

Pretty simple, right? Get to know these functions, and just make it a habit to use them in user-visible messages in your code. Even if you never expect to translate those messages, just by doing this you make it easier for someone else to come along and start translating for you.
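To see all four functions side by side, here's a minimal sketch of a script that uses them. It assumes Locale::TextDomain is installed from CPAN and reuses the My-GreatApp domain from above; $color and $num_files are made-up values:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use feature 'say';
use Locale::TextDomain qw(My-GreatApp);

# Hypothetical application data standing in for real values.
my $color     = 'blue';
my $num_files = 3;

say __ 'Hello';
say __x('You selected the color {color}', color => $color);
say __n('One file has been deleted', 'All files have been deleted', $num_files);
say __nx(
    'One file has been deleted.',
    '{count} files have been deleted.',
    $num_files,
    count => $num_files,
);
```

With no compiled catalogs installed, each function simply falls back on the original English string, so a script like this is safe to run before any translation work has been done.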

The setup

Now you're localizing your code. Great! What's next? Officially, nothing. If you never do anything else, your code will always emit the messages as written. You can ship it and things will work just as if you had never done any localization.

But what's the fun in that? Let's set things up so that translation catalogs will be built and distributed once they're written. Add these lines to your dist.ini:

[ShareDir]
[LocaleTextDomain]
There are actually quite a few attributes you can set here to tell the plugin where to find language files and where to put them. For example, if you used a domain different from your distribution name, e.g.,

use Locale::TextDomain 'com.example.My-GreatApp';

Then you would need to set the textdomain attribute so that the LocaleTextDomain plugin does the right thing with the language files:

textdomain = com.example.My-GreatApp

Consult the LocaleTextDomain configuration docs for details on all available attributes.

(Special note until this Locale::TextDomain patch is merged: set the share_dir attribute to lib instead of the default value, share. If you use Module::Build, you will also need a subclass to do the right thing with the catalog files; see "Installation" in Dist::Zilla::Plugin::LocaleTextDomain for details.)

What does including the plugin do? Mostly nothing. You might see this line from dzil build, though:

[LocaleTextDomain] Skipping language compilation: directory po does not exist

Now at least you know it was looking for something to compile for distribution. Let's give it something to find.

Initialize languages

To add translation files, use the msg-init command:

> dzil msg-init de
Created po/de.po.

At this point, the gettext utilities will need to be installed and visible in your path, or else you'll get errors.

This command scans all of the Perl modules gathered by Dist::Zilla and initializes a German translation file, named po/de.po. This file is now ready for your German-speaking public to start translating. Check it into your source code repository so they can find it. Create as many language files as you like:

> dzil msg-init fr ja.JIS en_US.UTF-8
Created po/fr.po.
Created po/ja.po.
Created po/en_US.po.

As you can see, each language results in the generation of the appropriate file in the po directory, sans encoding (i.e., no .UTF-8 in the en_US file name).

Now let your translators go wild with all the languages they speak, as well as the regional dialects. (Don't forget to colour your code with en_UK translations!)

Once you have translations and they're committed to your repository, when you build your distribution, the language files will automatically be compiled into binary catalogs. You'll see this line output from dzil build:

[LocaleTextDomain] Compiling language files in po
po/fr.po: 10 translated messages, 1 fuzzy translation, 0 untranslated messages.
po/ja.po: 10 translated messages, 1 fuzzy translation, 0 untranslated messages.
po/en_US.po: 10 translated messages, 1 fuzzy translation, 0 untranslated messages.

You'll then find the catalogs in the shared directory of your distribution:

> find My-GreatApp-0.01/share -type f
My-GreatApp-0.01/share/LocaleData/de/LC_MESSAGES/My-GreatApp.mo
My-GreatApp-0.01/share/LocaleData/en_US/LC_MESSAGES/My-GreatApp.mo
My-GreatApp-0.01/share/LocaleData/fr/LC_MESSAGES/My-GreatApp.mo
My-GreatApp-0.01/share/LocaleData/ja/LC_MESSAGES/My-GreatApp.mo

These binary catalogs will be installed as part of the distribution just where Locale::TextDomain can find them.

Here's an optional tweak: add this line to your MANIFEST.SKIP:

^po/
This prevents the po directory and its contents from being included in the distribution. Sure, you can include them if you like, but they're not required for the running of your app; the generated binary catalog files are all you need. Might as well leave out the translation files.

Mergers and acquisitions

You've got translation files, and helpful translators have given them a workover. What happens when you change your code, add new messages, or modify existing ones? The translation files periodically need to be updated with those changes, so that your translators can deal with them. The msg-merge command has you covered:

> dzil msg-merge
extracting gettext strings
Merging gettext strings into po/de.po
Merging gettext strings into po/en_US.po
Merging gettext strings into po/ja.po

This will scan your module files again and update all of the translation files with any changes. Old messages will be commented-out and new ones added. Just commit the changes to your repository and notify the translation army that they've got more work to do.

If for some reason you need to update only a subset of language files, you can simply list them on the command-line:

> dzil msg-merge po/de.po po/en_US.po
Merging gettext strings into po/de.po
Merging gettext strings into po/en_US.po

What's the scan, man

Both the msg-init and msg-merge commands depend on a translation template file to create and merge language files. Thus far, this has been invisible: they will create a temporary template file to do their work, and then delete it when they're done.

However, it's common to also store the template file in your repository and to manage it directly, rather than implicitly. If you'd like to do this, the msg-scan command will scan the Perl module files gathered by Dist::Zilla and make it for you:

> dzil msg-scan
extracting gettext strings into po/My-GreatApp.pot

The resulting .pot file will then be used by msg-init and msg-merge rather than scanning your code all over again. This makes msg-merge a two-step process: you need to update the template before merging. Updating the template is done with exactly the same command, msg-scan:

> dzil msg-scan
extracting gettext strings into po/My-GreatApp.pot
> dzil msg-merge
Merging gettext strings into po/de.po
Merging gettext strings into po/en_US.po
Merging gettext strings into po/ja.po

Ship It!

And that's all there is to it. Go forth and localize and internationalize your Perl apps!


My thanks to Ricardo Signes for invaluable help plugging in to Dist::Zilla, to Guido Flohr for providing feedback on this tutorial and being open to my pull requests, to David Golden for I/O capturing help, and to Jérôme Quelin for his patience as I wrote code to do the same thing as Dist::Zilla::Plugin::LocaleMsgfmt without ever noticing that it already existed.

Up for Adoption: SVN::Notify

I’ve kept my various Perl modules in a Subversion server run by my Bricolage support company, Kineticode, for many years. However, I’m having to shut down the server I’ve used for all my services, including Subversion, so I’ve moved them all to GitHub. As such, I no longer use Subversion in my day-to-day work.

It no longer seems appropriate that I maintain SVN::Notify. This has probably been my most popular module, and I know that it’s used a lot. It’s also relatively stable, with few bug reports or complaints. Nevertheless, there certainly could be some things that folks want to add, like TLS support, I18N, and inline CSS.

Therefore, SVN::Notify is formally up for adoption. If you’re a Subversion user, it’s a great tool. Just look at this sample output. If you’d like to take over maintenance and make it even better, please get in touch. Leave a comment on this post, or @theory me on Twitter, or send an email.

PS: Would love it if someone also could take over activitymail, the CVS notification script from which SVN::Notify was derived — and which I have even less right to maintain, given that I haven’t used CVS in years.

DBIx::Connector Exception Handling Design

In response to a bug report, I removed the documentation suggesting that one use the catch function exported by Try::Tiny to specify an exception-handling function for the DBIx::Connector execution methods. When I wrote those docs, Try::Tiny's catch function just returned the subroutine it was passed. It was later changed to return an object, and that didn't work very well. Since DBIx::Connector has no direct dependency on Try::Tiny, it seemed a much better idea not to depend on an external function that could change its behavior. I removed that documentation in 0.43. So instead of this:

$conn->run(fixup => sub {
    # ...
}, catch {
    # ...
});

It now recommends this:

$conn->run(fixup => sub {
    # ...
}, catch => sub {
    # ...
});

Which frankly is better balanced anyway.

But in discussion with Mark Lawrence in the ticket, it has become clear that there's a bit of a design problem with this approach. And that problem is that there is no try keyword, only catch. The fixup in the above example does not try, but the inclusion of the catch implicitly makes it behave like try. That also means that if you use the default mode (which can be set via the mode method), then there will usually be no leading keyword, either. So we get something like this:

$conn->run(sub {
    # ...
}, catch => sub {
    # ...
});

So it starts with a sub {} and no fixup keyword, but there is a catch keyword, which implicitly wraps that first sub {} in a try-like context. And aesthetically, it's unbalanced.

So I'm trying to decide what to do about these facts:

  • The catch implicitly makes the first sub be wrapped in a try-type context but without a try-like keyword.
  • If one specifies no mode for the first sub but has a catch, then it looks unbalanced.

At one level, I'm beginning to think that it was a mistake to add the exception-handling code at all. Really, that should be the domain of another module like Try::Tiny or, better, the language. In that case, the example would become:

use Try::Tiny;
try {
    $conn->run(sub {
        # ...
    });
} catch {
    # ...
};

And maybe that really should be the recommended approach. It seems silly to have replicated most of Try::Tiny inside DBIx::Connector just to cut down on the number of anonymous subs and indentation levels. The latter can be got round with some semi-hinky nesting:

try { $conn->run(sub {
    # ...
}) } catch {
    # ...
};

Kind of ugly, though. The whole reason the catch stuff was added to DBIx::Connector was to make it all nice and integrated (as discussed here). But perhaps it was not a valid tradeoff. I'm not sure.

So I'm trying to decide how to solve these problems. The options as I see them are:

  1. Add another keyword to use before the first sub that means "the default mode". I'm not keen on the word "default", but it would look something like this:

    $conn->run(default => sub {
        # ...
    }, catch => sub {
        # ...
    });

    This would provide the needed balance, but the catch would still implicitly execute the first sub in a try context. Which isn't a great idea.

  2. Add a try keyword. So then one could do this:

    $conn->run(try => sub {
        # ...
    }, catch => sub {
        # ...
    });

    This makes it explicit that the first sub executes in a try context. I'd also have to add companion try_fixup, try_ping, and try_no_ping keywords. Which are ugly. And furthermore, if there was no try keyword, would a catch be ignored? That's what would be expected, but it changes the current behavior.

  3. Deprecate the try/catch stuff in DBIx::Connector and eventually remove it. This would simplify the code and leave the responsibility for exception handling to other modules where it's more appropriate. But it would also be at the expense of formatting; it's just a little less aesthetically pleasing to have the try/catch stuff outside the method calls. But maybe it's just more appropriate.

I'm leaning toward #3, but perhaps might do #1 anyway, as it'd be nice to be more explicit and one would get the benefit of the balance with catch blocks for as long as they're retained. But I'm not sure yet. I want your feedback on this. How do you want to use exception-handling with DBIx::Connector? Leave me a comment here or on the ticket.

Template::Declare Explained

Today, Sartak uploaded a new version of Template::Declare. Why should you care? Well, in addition to the nice templating syntax, the new version features complete documentation. For everything.

This came about because I was trying to understand Template::Declare, with its occasional mentioning of “mixins” and “paths” and “roots,” and I just wasn’t getting it out of the existing documentation. Much of my confusion stemmed from how Catalyst::View::Template::Declare used Template::Declare. So I started peppering Jesse with questions and offering to fill in some gaps in the docs, and he was foolish enough to give me a commit bit.

I was particularly interested in the import_templates and alias methods. There was no documentation, and though there were tests, the two methods were so similar that I could barely tell the difference. I also wasn’t sure what the point was, though I had ideas. So I asked a bunch of questions and, through the discussion, I started to put the pieces together. I wrote more tests, and started refactoring things. I'd write some code, rename things, move them around, combine things, and then see who screamed. Jesse and Sartak were happy to run the Jifty test suite and even, I think, some Best Practical internal stuff to see what I broke. And then I'd think I got things just right and they would punch holes in it again.

But it finally came together, I understood what the methods were trying to do, and I documented the shit out of it. Then Sartak would copy-edit my docs, verifying my interpretations, and help me to understand where I got things wrong.

The new version features a glossary (useful for users of other templating packages) and extended examples that demonstrate XUL output, postprocessing, inheritance, and wrappers. And, most importantly, an explanation of aliasing (think delegation) and mixins (using the new name for import_templates: mix). I greatly appreciate the time the BPS team took to answer my noobish questions. And their patience as I ripped things apart and built them up again. The result is that, in addition to being better documented, the new version’s alias method creates much better-performing and less memory-intensive aliases.

So why was I doing all this? Well, Catalyst::View::Template::Declare never seemed quite right to me. And in my discussions with the Jifty guys, it seemed clear that its use of Template::Declare was trying to alias kinda sorta, but not really. So as I tried to understand aliasing, I realized that a new view class was needed for Catalyst. So I endeavored to really understand the features of Template::Declare so that I could do it right.

More news on that soon.

The upshot is that you have pretty nice control over mixing and aliasing Template::Declare templates into paths. For example, if you have this template class:

package MyApp::Templates::Util;
use base 'Template::Declare';
use Template::Declare::Tags;

template header => sub {
    my ($self, $args) = @_;
    head { title { $args->{title} } };
};

template footer => sub {
    div {
        id is 'fineprint';
        p { 'Site contents licensed under a Creative Commons License.' };
    };
};
You can mix those templates into your primary template class like so:

package MyApp::Templates::Main;
use base 'Template::Declare';
use Template::Declare::Tags;
use MyApp::Templates::Util;
mix MyApp::Templates::Util under '/util';

template content => sub {
    show '/util/header';
    body {
        h1 { 'Hello world' };
        show '/util/footer';
    };
};

See how I've used the mixed-in header and footer templates by referring to them under the /util path? This gives the invocation of the other templates the feel of calling Mason components or invoking Template Toolkit templates. You can use these templates like so:

Template::Declare->init( dispatch_to => ['MyApp::Templates::Main'] );
print Template::Declare->show('/content');

So MyApp::Templates::Main’s templates are in the “/” directory, so to speak, while MyApp::Templates::Util’s templates are in the “/util” subdirectory. Pretty cool, eh?
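Aliasing looks much the same on the surface. This sketch, assuming the same MyApp::Templates::Util class from above, is my understanding of the equivalent alias setup:

```perl
package MyApp::Templates::Main;
use base 'Template::Declare';
use Template::Declare::Tags;
use MyApp::Templates::Util;

# Unlike mix, which copies the templates into this class, alias
# delegates: invoking '/util/header' dispatches back to
# MyApp::Templates::Util.
alias MyApp::Templates::Util under '/util';
```

The templates are still invoked as /util/header and /util/footer; the difference is that aliased templates execute in the context of the original class, delegation-style, while mixed-in templates behave as though they had been defined in the class that mixed them in.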

So with this understanding in place, I had a much better feel for Template::Declare, and could better think of it in normal templating terms. Now I'm this much closer to my ideal Catalyst development environment. More soon.

Pod::Simple 3.09 Hits the CPAN

I spent some time over the last few days helping Allison fix bugs and close tickets for a new version of Pod::Simple. I'm not sure how I convinced Allison to suddenly dedicate her day to fixing Pod::Simple bugs and putting out a new release. She must've had some studies or Parrot spec work she wanted to get out of or something.

Either way, it's got some useful fixes and improvements:

  • The XHTML formatter now supports tables of contents (via the poorly-named-but-consistent-with-the-HTML-formatter index parameter).

  • You can now reformat verbatim blocks via the strip_verbatim_indent parameter/method. Because you have to indent verbatim blocks (code examples) with one or more spaces, you end up with those spaces remaining in output. Just have a look at an example on search.cpan.org. See how the code in the Synopsis is indented? That's because it's indented in the POD. But maybe you don't want it to be indented in your final output. If not, you can strip out leading spaces via strip_verbatim_indent. Pass in the text to strip out:

    $parser->strip_verbatim_indent('  ');

    Or a code reference that figures out what to strip out. I'm fond of stripping based on the indentation of the first line, like so:

    $parser->strip_verbatim_indent(sub {
        my $lines = shift;
        (my $indent = $lines->[0]) =~ s/\S.*//;
        return $indent;
    });

  • You can now use the nocase parameter to Pod::Simple::PullParser to tell the parser to ignore the case of POD blocks when searching for author, title, version, and description information. This is a hack that Graham has used for a while on search.cpan.org, in part because I nagged him about my modules, which don't use uppercase =head1 text. Thanks Graham!

  • Fixed entity encoding in the XHTML formatter. It was failing to encode entities everywhere except code spans and verbatim blocks. Oops. It also now properly encodes E<sol> and E<verbar>, as well as numeric entities.

  • Multiparagraph items now work properly in the XHTML formatter, as do text items (definition lists).

  • A POD tag found inside a complex POD tag (e.g., C<<< C<foo> >>>) is now properly parsed as text and entities instead of a tag embedded in a tag (e.g., <foo>). This is in compliance with perlpod.
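As an aside, here's a complete little program that exercises the new strip_verbatim_indent feature through the XHTML formatter. It's just a sketch; the POD document is made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Pod::Simple::XHTML;

# A made-up POD document with an indented verbatim block.
my $pod = <<'POD';
=head1 Synopsis

    use My::Module;
    my $obj = My::Module->new;

=cut
POD

my $parser = Pod::Simple::XHTML->new;
$parser->html_header('');
$parser->html_footer('');

# Strip the first line's leading whitespace from every line of
# each verbatim block.
$parser->strip_verbatim_indent(sub {
    my $lines = shift;
    (my $indent = $lines->[0]) =~ s/\S.*//;
    return $indent;
});

my $xhtml = '';
$parser->output_string(\$xhtml);
$parser->parse_string_document($pod);
print $xhtml;
```

The code in the resulting <pre> block comes out flush left, rather than carrying the four leading spaces from the POD source.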

This last item is the only change I think might lead to problems. I fixed it in response to a bug report from Schwern. The relevant bit from the perlpod spec is:

A more readable, and perhaps more “plain” way is to use an alternate set of delimiters that doesn’t require a single “>” to be escaped. With the Pod formatters that are standard starting with perl5.5.660, doubled angle brackets (“<<” and “>>”) may be used if and only if there is whitespace right after the opening delimiter and whitespace right before the closing delimiter! For example, the following will do the trick:

C<< $a <=> $b >>

In fact, you can use as many repeated angle‐brackets as you like so long as you have the same number of them in the opening and closing delimiters, and make sure that whitespace immediately follows the last ’<’ of the opening delimiter, and immediately precedes the first “>” of the closing delimiter. (The whitespace is ignored.) So the following will also work:

C<<< $a <=> $b >>>
C<<<<  $a <=> $b     >>>>

And they all mean exactly the same as this:

C<$a E<lt>=E<gt> $b>

Although all of the examples use C<< >>, it seems pretty clear that it applies to all of the span tags ( B<< >>, I<< >>, F<< >>, etc.). So I made the change so that tags embedded in these “complex” tags, as comments in Pod::Simple call them, are not treated as tags. That is, all < and > characters are encoded.

Unfortunately, despite what the perlpod spec says (at least in my reading), Sean had quite a few pathological examples in the tests that expected POD tags embedded in complex POD tags to work. Here's an example:

L<<< Perl B<Error E<77>essages>|perldiag >>>

Before I fixed the bug, that was expected to be output as this XML:

<L to="perldiag" type="pod">Perl <B>Error Messages</B></L>

After the bug fix, it's:

<L content-implicit="yes" section="Perl B&#60;&#60;&#60; Error E&#60;77&#62;essages" type="pod">&#34;Perl B&#60;&#60;&#60; Error E&#60;77&#62;essages&#34;</L>

Well, there's a lot more crap that Pod::Simple puts in there, but the important thing to note is that neither the B<> nor the E<> is evaluated as a POD tag inside the L<<< >>> tag. If that seems inconsistent at all, just remember that POD tags still work inside non-complex POD tags (that is, when there is just one set of angle brackets):

L<Perl B<Error E<77>essages>|perldiag>

I'm pretty sure that few users were relying on POD tags working inside complex POD tags anyway. At least I hope so. I'm currently working up a patch for blead that updates Pod::Simple in core, so it will be interesting to see if it breaks anyone's POD. Here's to hoping it doesn't!

DBIx::Connector Updated

After much gnashing of teeth, heated arguments with @robkinyon and @mst, lots of deep thesaurus spelunking, and three or four iterations, I finally came up with an improved API for DBIx::Connector that I believe is straight-forward and easy to explain.

Following up on my post last week, I explored, oh I dunno, a hundred different terms for the various methods? I've never spent so much time on thesaurus.com in my life. Part of what added to the difficulty was that @mst seemed to think that there should actually be three modes for each block method: one that pings, one that doesn't, and one that tries again if a block dies and the connection is down. So I went from six to nine methods with that assertion.

What I finally came up with was to name the three basic methods run(), txn_run(), and svp_run(), and these would neither ping nor retry in the event of failure. Then I added variations on these methods that would ping and that would try to fix failures. I called these “ping runs” and “fixup runs,” respectively. It was the latter term, “fixup,” that had been so hard for me to nail down, as “retry” seemed to say that the method was a retry, while “fixup” more accurately reflects that the method would try to fix up the connection in the event of a failure.

Once I'd implemented this interface, I now had nine methods:

  • run()
  • txn_run()
  • svp_run()
  • ping_run()
  • txn_ping_run()
  • svp_ping_run()
  • fixup_run()
  • txn_fixup_run()
  • svp_fixup_run()

This worked great. Then I went about documenting it. Jesus Christ what a pain! I realized that all these similarly-named methods would require a lot of explanation. I duly wrote up said explanation, and just wasn't happy with it. It just felt to me like all the explanation made it too difficult to decide what methods to use and when. Such confusion would make the module less likely to be used -- and certainly less likely to be used efficiently.

So I went back to the API drawing board and, reflecting on @robkinyon's browbeating about decorating methods and @mst's coming to that conclusion as well, I finally came up with just three methods:

  • run()
  • txn()
  • svp()

For any one of these, you can call it by passing a block, of course:

$conn->txn( sub { $_->do('SELECT some_function()') } );

In addition, you can now have any one of them run in one of three modes: the default (no ping), “ping”, or “fixup”:

$conn->txn( fixup => sub { $_->do('SELECT some_function()') } );

It's much easier to explain the three methods in terms of how the block is transactionally encapsulated, as that's the only difference between them. Once that's understood, it's pretty easy to explain how to change the “connection mode” of each by passing in a leading string. It even looks pretty nice. I'm really happy with this API.

One thing that increased the difficulty in coming up with this API was that @mst felt that by default the methods should neither ping nor try to fix up a failure. I was resistant to this because it's not how Apache::DBI or connect_cached() work: they always ping. It turns out that DBIx::Class doesn't cache connections at all. I thought it had. Rather, it creates a connection and simply hangs onto it as a scalar variable. It handles the connection for as long as it's in scope, but includes no magic global caching. This reduces the action-at-a-distance issues common with caching while maintaining proper fork- and thread-safety.

At this point, I took a baseball bat to my desk.

Figuratively, anyway. I did at least unleash a mountain of curses upon @mst and various family relations. Because it took me a few minutes to see it: It turns out that DBIx::Class is right to do it this way. So I ripped out the global caching from DBIx::Connector, and suddenly it made much more sense not to ping by default -- just as you wouldn't ping if you created a DBI handle yourself.

DBIx::Connector is no longer a caching layer over the DBI. It's now a proxy for a connection. That's it. There is no magic, no implicit behavior, so it's easier to use. And because it ensures fork- and thread-safety, you can instantiate a connector and hold onto it for whenever you need it, unlike using the DBI itself.
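In practice, that means usage looks like ordinary object construction: create the connector once and hold onto it. A sketch of what I mean, with a made-up DSN and table, assuming DBD::SQLite is available:

```perl
use strict;
use warnings;
use DBIx::Connector;

# Create the connector once; it's safe to hold onto across forks
# and threads, since it always hands back an appropriate handle.
my $conn = DBIx::Connector->new('dbi:SQLite:dbname=myapp.db', '', '', {
    RaiseError => 1,
    AutoCommit => 1,
});

# No ping by default, just as with a raw DBI handle.
$conn->run(sub { $_->do('CREATE TABLE IF NOT EXISTS logs (msg TEXT)') });

# Opt into fixup mode where re-running the block is safe.
$conn->txn(fixup => sub {
    $_->do('INSERT INTO logs (msg) VALUES (?)', undef, 'hello');
});
```

Within each block, $_ is the database handle, so the block body reads just like plain DBI code.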

And one more thing: I also added a new method, with(). For those who always want to use the same connection mode, you can use this method to create a proxy object that has a different default mode. (Yes, a proxy for a proxy for a database handle. Whatever!) Use it like this:

$conn->with('fixup')->run( sub { ... } );

And if you always want to use the same mode, hold onto the proxy instead of the connection object:

my $proxy = DBIx::Connector->new(@args)->with('fixup');

# later ...
$proxy->txn( sub { ... } ); # always in fixup mode

So fixup mode is no longer the default, as Tim requested, but it can optionally be made the default, as DBIx::Class requires. The with() method will also be the place to add other global behavioral modifications, such as DBIx::Class's auto_savepoint feature.

So for those of you who were interested in the first iteration of this module, my apologies for changing things so dramatically in this release (ripping out the global caching, deprecating methods, adding a new block method API, etc.). But I think that, for all the pain I went through to come up with the new API -- all the arguing on IRC, all the thesaurus spelunking -- this is a really good API, easy to explain and understand, and easy to use. And I don't expect to change it again. I might improve exceptions (use objects instead of strings?) or add block method exception handling (perhaps adding a catch keyword?), but the basics are finally nailed down and here to stay.

Thanks to @mst, @robkinyon, and @ribasushi, in particular, for bearing with me and continuing to hammer on me when I was being dense.

Suggest Method Names for DBIx::Connector

Thanks to feedback from Tim Bunce and Peter Rabbitson in a DBIx::Class bug report, I've been reworking DBIx::Connector's block-handling methods. Tim's objection is that the feature of do() and txn_do() that executes the code reference a second time in the event of a connection failure can be dangerous. That is, it can lead to action-at-a-distance bugs that are hard to find and fix. Tim suggested renaming the methods do_with_retry() and txn_do_with_retry() in order to make explicit what's going on, and to have non-retry versions of the methods.

I've made this change in the repository. But I wasn't happy with the method names; even though they're unambiguous, they are also overly long and not very friendly. I want people to use the retrying methods, but felt that the long names would make the non-retrying versions preferable to users. While I was at it, I also wanted to get rid of do(), since it quickly became clear that it could cause some confusion with the DBI's do() method.

I've been thesaurus spelunking for the last few days, and have come up with a few options, but would love to hear other suggestions. I like using run instead of do to avoid confusion with the DBI, but otherwise I'm not really happy with what I've come up with. There are basically five different methods (using Tim's suggestions for the moment):

run( sub {} )
Just run a block of code.
txn_run( sub {} )
Run a block of code inside a transaction.
run_with_retry( sub {} )
Run a block of code without pinging the database, and re-run the code if it throws an exception and the database turned out to be disconnected.
txn_run_with_retry( sub {} )
Like run_with_retry(), but run the block inside a transaction.
svp_run( sub {} )
Run a block of code inside a savepoint (no retry for savepoints).

Here are some of the names I've come up with so far:

Run block  Run in txn  Run in savepoint  Run with retry  Run in txn with retry  Retry Mnemonic
run        txn_run     svp_run           runup           txn_runup              Run assuming the db is up, retry if not.
run        txn_run     svp_run           run_up          txn_run_up             Same as above.
run        txn_run     svp_run           rerun           txn_rerun              Run assuming the db is up, rerun if not.
run        txn_run     svp_run           run::retry      txn_run::retry         :: means “with”

That last one is a cute hack suggested by Rob Kinyon on IRC. As you can see, I'm pretty consistent with the non-retrying method names; it's the methods that retry that I'm not satisfied with. An approach I've avoided is to use an adverb for the non-retry methods, mainly because there is no retry possible for the savepoint methods, so it seemed silly to have svp_run_safely() to complement do_safely() and txn_do_safely().

Brilliant suggestions warmly appreciated.

Database Handle and Transaction Management with DBIx::Connector

As part of my ongoing effort to wrestle Catalyst into working the way that I think it should work, I've just uploaded DBIx::Connector to the CPAN. See, I was using Catalyst::Model::DBI, but it turned out that I wanted to use the database handle in places other than the Catalyst parts of my app. I was bitching about this to mst on #catalyst, and he said that Catalyst::Model::DBI was actually a fork of DBIx::Class's handle caching, and quite out of date. I said, “But this already exists. It's called connect_cached().” I believe his response was, “OH FUCK OFF!”

So I started digging into what Catalyst::Model::DBI and DBIx::Class do to cache their database handles, and how it differs from connect_cached(). It turns out that they were pretty smart, in terms of checking to see if the process had forked or a new thread had been spawned, and if so, deactivating the old handle and then returning a new one. Otherwise, things are just cached. This approach works well in Web environments, including under mod_perl; in forking applications, like POE apps; and in plain Perl scripts. Matt said he'd always wanted to pull that functionality out of DBIx::Class and then make DBIx::Class depend on the external implementation. That way everyone could take advantage of the functionality, including people like me who don't want to use an ORM.
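The core of that fork- and thread-check is easy to sketch. The following is simplified pseudocode of the approach, not the actual source of any of these modules; _connect() is a hypothetical method:

```
sub dbh {
    my $self = shift;
    if ( $self->{pid} != $$ ) {
        # We forked: the cached handle belongs to the parent process, so
        # keep our copy's destruction from disconnecting the shared socket.
        $self->{dbh}{InactiveDestroy} = 1 if $self->{dbh};
        return $self->{dbh} = $self->_connect;
    }
    if ( $INC{'threads.pm'} && $self->{tid} != threads->tid ) {
        # New thread: database handles can't cross threads, so reconnect.
        return $self->{dbh} = $self->_connect;
    }
    return $self->{dbh};
}
```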

So I did it. Maybe it was crazy (mmmmm…yak meat), but I can now use the same database interface in the Catalyst and POE parts of my application without worry:

my $dbh = DBIx::Connector->connect(
    'dbi:Pg:dbname=circle', 'postgres', '', {
        PrintError     => 0,
        RaiseError     => 0,
        AutoCommit     => 1,
        HandleError    => Exception::Class::DBI->handler,
        pg_enable_utf8 => 1,
    }
);

But it's not just database handle caching that I've included in DBIx::Connector; no, I've also stolen some of the transaction management stuff from DBIx::Class. All you have to do is grab the connector object which encapsulates the database handle, and take advantage of its txn_do() method:

my $conn = DBIx::Connector->new(@args);
$conn->txn_do(sub {
    my $dbh = shift;
    $dbh->do($_) for @queries;
});

The transaction is scoped to the code reference passed to txn_do(). Not only that, it avoids the overhead of calling ping() on the database handle unless something goes wrong. Most of the time, nothing goes wrong: the database is there, so you can proceed accordingly. If it is gone, however, txn_do() will re-connect and execute the code reference again. The cool thing is that you will never notice that the connection was dropped -- unless it's still gone after the second execution of the code reference.
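Conceptually, that behavior boils down to something like this simplified sketch (not the module's actual source; _in_txn() and _reconnect() are hypothetical helpers):

```
sub txn_do {
    my ($self, $code) = @_;
    my $dbh = $self->{dbh};
    my @ret = eval { _in_txn( $dbh, $code ) };
    if ( my $err = $@ ) {
        die $err if $dbh->ping;        # db is up: a genuine error, rethrow
        $dbh = $self->_reconnect;      # db went away: get a fresh handle...
        @ret = _in_txn( $dbh, $code ); # ...and run the block one more time
    }
    return @ret;
}
```

Note that ping() runs only on the failure path, which is why the common case stays cheap.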

And finally, thanks to some pushback from mst, ribasushi, and others, I added savepoint support. It's a little different than that provided by DBIx::Class; instead of relying on a magical auto_savepoint attribute that subtly changes the behavior of txn_do(), you just use the svp_do() method from within txn_do(). The scoping of subtransactions is thus nicely explicit:

$conn->txn_do(sub {
    my $dbh = shift;
    $dbh->do('INSERT INTO table1 VALUES (1)');
    eval {
        $conn->svp_do(sub {
            shift->do('INSERT INTO table1 VALUES (2)');
            die 'OMGWTF?';
        });
    };
    warn "Savepoint failed\n" if $@;
    $dbh->do('INSERT INTO table1 VALUES (3)');
});

This transaction will insert the values 1 and 3, but not 2. If you call svp_do() outside of txn_do(), it will call txn_do() for you, with the savepoint scoped to the entire transaction:

$conn->svp_do(sub {
    my $dbh = shift;
    $dbh->do('INSERT INTO table1 VALUES (4)');
    $conn->svp_do(sub {
        shift->do('INSERT INTO table1 VALUES (5)');
    });
});

This transaction will insert both 4 and 5. And note that you can nest savepoints as deeply as you like. All this is dependent on whether the database supports savepoints; so far, PostgreSQL, MySQL (InnoDB), Oracle, MSSQL, and SQLite do. If you know of others, fork the repository, commit changes to a branch, and send me a pull request!
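For reference, the first savepoint example above (values 1, 2, and 3) amounts to roughly this sequence of statements on the server (PostgreSQL-style syntax; the savepoint name is illustrative):

```
BEGIN;
INSERT INTO table1 VALUES (1);
SAVEPOINT s1;
INSERT INTO table1 VALUES (2);
ROLLBACK TO SAVEPOINT s1;  -- the die() rolls back to the savepoint
INSERT INTO table1 VALUES (3);
COMMIT;
```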

Overall I'm very happy with this module, and I'll probably use it in all my Perl database projects from here on in. Perhaps later I'll build a model class on it (something like Catalyst::Model::DBI, only better!), but next up, I plan to finish documenting Template::Declare and writing some views with it. More on that soon.

Use Rubyish Blocks with Test::XPath

Thanks to the slick Devel::Declare-powered PerlX::MethodCallWithBlock created by gugod, the latest version of Test::XPath supports Ruby-style blocks. The Ruby version of assert_select, as I mentioned previously, looks like this:

assert_select "ol" { |elements|
  elements.each { |element|
    assert_select element, "li", 4
  }
}

I've switched to the brace syntax for greater parity with Perl. Test::XPath, meanwhile, looks like this:

my @css = qw(foo.css bar.css);
$tx->ok( '/html/head/style', sub {
    my $css = shift @css;
    shift->is( './@src', $css, "Style src should be $css");
}, 'Should have style' );

But as of Test::XPath 0.13, you can now just use PerlX::MethodCallWithBlock to pass blocks in the Rubyish way:

use PerlX::MethodCallWithBlock;
my @css = qw(foo.css bar.css);
$tx->ok( '/html/head/style', 'Should have style' ) {
    my $css = shift @css;
    shift->is( './@src', $css, "Style src should be $css");
};

Pretty slick, eh? It required a single-line change to the source code. I'm really happy with this sugar. Thanks for the great hack, gugod!

The Future of SVN::Notify

This week, I imported pgTAP into GitHub. It took me a day or so to wrap my brain around how it's all supposed to work, with generous help from Tekkub. But I'm starting to get the hang of it, and I like it. By the end of the day, I had sent pull requests to Test::More and Blosxom Plugins. I'm well on my way to being hooked.

One of the things I want, however, is SVN::Notify-type commit emails. I know that there are feeds, but they don't have diffs, and for however much I like using NetNewsWire to feed my political news addiction, it never worked for me for commit activity. And besides, why download the whole damn thing again, diffs and all (assuming that ever happens), for every refresh? Seems like a hell of a lot of unnecessary network activity—not to mention actual CPU cycles.

So I would need a decent notification application. I happen to have one. I originally wrote SVN::Notify after I had already written activitymail, which sends notices for CVS commits. SVN::Notify has changed a lot over the years, and now it's looking a bit daunting to consider porting it to Git.

However, just to start thinking about it, SVN::Notify really does several different things:

  • Fetches relevant information about a Subversion event.
  • Parses that information for a number of different outputs.
  • Writes the event information into one or more outputs (currently plain text or XHTML).
  • Constructs an email message from the outputs.
  • Sends the email message via a specified method (sendmail or SMTP).

For the initial implementation of SVN::Notify, this made a lot of sense, because it was doing something fairly simple. It was designed to be extensible by subclassing (successfully done by SVN::Notify::Config and SVN::Notify::Mirror), and, later, by output filters, and that was about it.

But as I think about moving stuff to Git, and consider the weaknesses of extensibility by subclassing (it's just not pretty), I'm naturally rethinking this architecture. I wouldn't want to have to do it all over again should some new SCM system come along in the future. So, following from a private exchange with Martijn Van Beers, I have some preliminary thoughts on how a hypothetical SCM::Notify (VCS::Notify?) module might be constructed:

  • A single interface for fetching SCM activity information. There could be any number of implementations, just as long as they all provided the same interface. There would be a class for fetching information from Subversion, one for Git, one for CVS, etc.
  • A single interface for writing a report for a given transaction. Again, there could be any number of implementations, but all would have the same interface: taking an SCM module and writing output to a file handle.
  • A single interface for doing something with one or more outputs. Again, they can do things as varied as simply writing files to disk, appending to a feed, inserting into a database, or, of course, sending an email.
  • The core module would process command-line arguments to determine which SCM is being used and any necessary contextual information, and would just pass it all on to the appropriate classes.

In pseudo-code, what I'm thinking is something like this:

package SCM::Notify;

sub run {
    my $args = shift->getopt;
    my $scm  = SCM::Interface->new(
        scm      => $args->{scm},      # e.g., "SVN" or "Git", etc.
        revision => $args->{revision},
        context  => $args->{context},  # Might include repository path for SVN.
    );

    my $report = SCM::Report->new(
        method => $args->{method}, # e.g., SMTP, sendmail, Atom, etc.
        scm    => $scm,
        format => $args->{output}, # text, html, both, etc.
        params => $args->{params}, # to, from, subject, etc.
    );

    $report->send;
}


Then a report class just has to create a report in the specified format or formats and do something with them. For example, a Sendmail report would put together a multipart message with each format in its own part, and then deliver it via /sbin/sendmail, something like this:

package SCM::Report::Sendmail;

sub send {
    my $self = shift;
    my $fh = $self->fh;
    for my $format ( $self->formats ) {
        print $fh SCM::Format->new(
            format => $format,
            scm    => $self->scm,
        );
    }
    $self->deliver;
}

So those are my rather preliminary thoughts. I think it'd actually be pretty easy to port the logic of this stuff over from SVN::Notify; what needs some more thought is what the command-line interface might look like and how options are passed to the various classes, since the Sendmail report class will require different parameters than the SMTP report class or the Atom report class. But once that's worked out in a way that can be handled neutrally, we'll have a much more extensible implementation that will be easy to add on to going forward.

Any suggestions for passing different parameters to different classes in a single interface? Everything needs to be manageable via command-line options, without being ugly or difficult to use.

So, you wanna work on this? :-)

SVN::Notify 2.70: Output Filtering and Character Encoding

I'm very pleased to announce the release of SVN::Notify 2.70. You can see an example of its colordiff output here. This is a major release that I've spent the last several weeks polishing and tweaking to get just right. There are quite a few changes, but the two most important are improved character encoding support and output filtering.

Improved Character Encoding Support

I've had a number of bug reports regarding issues with character encodings. Particularly for folks working in Europe and Asia, but really for anyone using multibyte characters in their source code and log messages (and we all do nowadays, don't we?), it has been difficult to find the proper incantation to get SVN::Notify to convert data from and to their proper encodings. Using a patch from Toshikazu Kinkoh as a starting point, and with a lot of reading and experimentation, as well as regular and patient tests on Toshikazu's and Martin Lindhe's production systems, I think I've finally got it nailed down.

Now you can use the --encoding (formerly --charset), --svn-encoding, and --diff-encoding options—as well as --language—to get SVN::Notify to do the right thing. As long as your Subversion server's OS supports an appropriate locale, you should be golden (mine is old, with no UTF-8 locales :\). And if all else fails, you can still set the $LANG environment variable before executing svnnotify.

There is actually a fair bit to know about encodings to get it to work properly, but if you use UTF-8 throughout and your OS supports UTF-8 locales, you shouldn't have to do anything. You might have to set --language in order to get it to use the proper locale. See the new documentation of the encoding support for all the details. And if you still have problems, please do let me know.
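For example, a repository whose log messages are stored in euc-jp but whose notifications should go out in UTF-8 might use an incantation along these lines (the addresses and encoding values here are illustrative; adjust them for your server's locales):

```
svnnotify --repos-path "$1" --revision "$2" \
  --to developers@example.com --handler HTML::ColorDiff \
  --svn-encoding euc-jp --encoding UTF-8 --language ja_JP
```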

Output Filtering

Much sexier is the addition of output filtering in SVN::Notify 2.70. I got pretty tired of getting feature requests for what are essentially formatting modifications, such as this one requesting KDE-style keyword support. I myself was using Trac wiki syntax in commit messages on a recent project and wanted to see them converted to HTML for messages output by SVN::Notify::HTML::ColorDiff.

So I finally sat down and gave some thought to how to implement a simple plugin architecture for SVN::Notify. When I realized that it was generally just formatting that people wanted, it became simpler: I just needed a way to allow folks to write simple output filters. The solution I came up with was to just use Perl. Output filters are simply subroutines named for the kind of output they filter. They live in Perl packages. That's it.

For example, say that your developers write their commit log messages in Textile, and rather than receive them stuck inside <pre> tags, you'd like them converted to HTML. It's simple. Just put this code in a Perl module file:

package SVN::Notify::Filter::Textile;
use Text::Textile ();

sub log_message {
    my ($notifier, $lines) = @_;
    return $lines unless $notifier->content_type eq 'text/html';
    return [ Text::Textile->new->process( join $/, @$lines ) ];
}
Put the file, SVN/Notify/Filter/Textile.pm, somewhere in a Perl library directory. Then use the new --filter option to svnnotify to put it to work:

svnnotify -p "$1" -r "$2" --handler HTML::ColorDiff --filter Textile

Yep, that's it! SVN::Notify will find the filter module, load it, register its filtering subroutine, and then call it at the appropriate time. Of course, there are a lot of things you can filter; consult the complete documentation for all of the details. But hopefully this gives you a flavor for how easy it is to write new filters for SVN::Notify. I'm hoping that all those folks who want features can now stop bugging me, write their own filters to do the job, and upload them to CPAN for all to share!
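To make the calling convention concrete, here's a hypothetical filter boiled down to its essentials, with the call SVN::Notify would make simulated directly (the Shout package name and the sample message are made up for illustration):

```perl
use strict;
use warnings;

# A filter subroutine receives the notifier object and an array reference
# of log-message lines, and must return an array reference of lines.
package SVN::Notify::Filter::Shout;

sub log_message {
    my ($notifier, $lines) = @_;
    return [ map { uc } @$lines ];
}

# Simulate the call SVN::Notify would make; this filter ignores the
# notifier object, so undef stands in for it here.
package main;
my $filtered = SVN::Notify::Filter::Shout::log_message( undef, ['fixed a bug'] );
print $filtered->[0], "\n";
```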

To get things started, I scratched my own itch, writing a Trac filter myself. The filter is almost as simple as the Textile example above, but I also spent quite a bit of time tweaking the CSS so that most of the Trac-generated HTML looks good. You can see an example right here. Thanks to a number of bug fixes in Text::Trac, as well as Trac-specific CSS added via a filter on CSS output, it works beautifully. If I'm feeling motivated in the next week or so, I'll create a separate CPAN distribution with just a Markdown filter and upload it. That will create a nice distribution example for folks to copy to create their own. Or maybe someone on the Lazy Web will do it for me! Maybe you?

I wish I'd thought to do this from the beginning; it would have saved me from having to add so many features/cruft to SVN::Notify over the years. Here's a quick list of the features that likely could have been implemented via filters instead of added to the core:

  • --user-domain: Combine the SVN username with a domain for the From header.
  • --add-header: Add a header to the message.
  • --reply-to: Add a specific header to the message.
  • SVN::Notify::HTML::ColorDiff: Frankly, looking back on it, I don't know why I didn't just put this support right into SVN::Notify::HTML. But even if I hadn't, it could have been implemented via filters.
  • --subject-prefix: Modify the message subject.
  • --subject-cx: Add the commit context to the subject.
  • --strip-cx-regex: More subject context modification.
  • --no-first-line: Another subject filter.
  • --max-sub-length: Yet another!
  • --max-diff-length: A filter could truncate the diff, although this might be tricky with the HTML formatting.
  • --author-url: Modify the metadata section to add a link to the author URL.
  • --revision-url: Ditto for the revision URL.
  • --ticket-map: Filter the log message for various ticketing system strings to convert to URLs. This also encompasses the old --rt-url, --bugzilla-url, --gnats-url, and --jira-url options.
  • --header: Filter the beginning of the message.
  • --footer: Filter the end of the message.
  • --linkize: Filter the log message to convert URLs to links for HTML messages.
  • --css-url: Filter the CSS to modify it, or filter the start of the HTML to add a link to an external CSS URL.
  • --wrap-log: Reformat the log message for HTML.

Yes, really! That's about half the functionality right there. I'm glad that I won't have to add any more like that; filters are a much better way to go.

So download it, install it, write some filters, get your multibyte characters output properly, and enjoy! And as usual, send me your bug reports, but implement your own improvements using filters!

SVN::Notify 2.57 Supports Windows

So I finally got 'round to porting SVN::Notify to Windows. Version 2.57 is making its way to CPAN right now. The solution turned out to be dead simple: I just had to use a different form of piping open() on Windows, i.e., open FH, "$cmd|" instead of open FH, "-|"; exec($cmd);. It's silly, really, but it works. It really makes me wonder why -| and |- haven't been emulated on Windows. Whatever.
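The two forms look like this side by side. This is a standalone demo, not SVN::Notify's code: $cmd here is a portable stand-in for the svnlook invocation the module actually reads from.

```perl
use strict;
use warnings;

# A portable command that prints something; $^X is the running perl.
my $cmd = qq{$^X -e "print 'hello'"};

my $fh;
if ( $^O eq 'MSWin32' ) {
    # Windows: -| isn't emulated, so let the shell set up the pipe.
    open $fh, "$cmd|" or die "Cannot pipe from $cmd: $!";
}
else {
    # Unix: fork, then exec the command in the child; no shell required
    # unless the command contains metacharacters.
    my $pid = open $fh, '-|';
    die "Cannot fork: $!" unless defined $pid;
    exec $cmd or die "Cannot exec $cmd: $!" unless $pid;
}
my $line = <$fh>;
close $fh;
print "$line\n";
```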

'Course the other thing I realized, after I made this change and all the tests pass, was that there is no equivalent of sendmail on Windows. So I added the --smtp option, so that now email can be sent to an SMTP server rather than to a local sendmail. I tested it out, and it seems to work, but I'd be especially interested to hear from folks using wide characters in their repositories: do they get printed properly to Net::SMTP's connection?

The whole list of changes in 2.57 (the output remains the same as in 2.56):

  • Finally ported to Win32. It was actually a simple matter of changing how command pipes are created.
  • Added --smtp option to enable sending messages to an SMTP server rather than to the local sendmail application. This is essential for Windows support.
  • Added --io-layer to the usage statement in svnnotify.
  • Fixed single-dash arguments in documentation so that they're all documented with a single dash in SVN::Notify.


SVN::Notify 2.56 Adds Alternative Formats

I've just uploaded SVN::Notify 2.56 to CPAN. Check a mirror near you! There have been a lot of changes since I last posted about SVN::Notify (for the 2.50 release), not least of which is that SourceForge has standardized on it for their Subversion roll out. W00t! The result was a couple of patches from SourceForge's David Burley to add headers and footers and to truncate diffs over a certain size. See the sample output for how it looks. Thanks, David!

The change I'm most pleased with in 2.56 is the addition of SVN::Notify::Alternative, based on a submission from Jukka Zitting. This new subclass allows you to actually combine a number of other subclasses into a single activity notification message. Why? Well, mainly because, though you might like to get HTML messages with colorized diffs, some mail clients might not care for the HTML. They would much prefer the plain text version.

SVN::Notify::Alternative allows you to have your cake and eat it too: send a single message with multipart/alternative sections for both HTML output and plain text. Plain text will always be used; to use HTML::ColorDiff with it, just do this:

svnnotify --repos-path "$1" --revision "$2" \
  --to developers@example.com --handler Alternative \
  --alternative HTML::ColorDiff --with-diff

This incantation will send an email with both the plain text and HTML::ColorDiff formats. If you look at it in Mail.app, you'll see the nice colorized format, and if you look at it in pine, you'll see the plain text.

For the curious, here are all of the changes since 2.50:

2.56 2006-04-04T23:16:37
  • Abstracted creation of the diff file handle into the new diff_handle() method.
  • Documented use of diff_handle() in the output() method.
  • Added optional second argument to output() to optionally suppress the output of the email headers. This argument is used by the new Alternative subclass.
  • Added SVN::Notify::Alternative, which allows multiple versions of a commit email to be sent, such as text/plain plus HTML. The multiple versions are assembled into a single email message using the multipart/alternative media type. For those who want HTML messages but must support users that can only read plain text or rely on archives that ignore HTML messages, this can be very useful. Based on an implementation by Jukka Zitting.
  • Fixed use_ok() tests that weren't running at all.
  • Added an extra newline to separate the file list from an inline diff in the plain text format where --with-diff has been specified.
  • Moved the multipart/mixed content-type header generation from output_headers() to output_content_type(), not only because this makes more sense, but also because it makes attachments behave better when using SVN::Notify::Alternative.
  • Documented accessors in SVN::Notify::HTML.
2.55 2006-04-03T23:11:11
  • Added the --io-layer option to specify an alternate IO layer. Will be most useful for those with repositories containing text in multiple encodings, where it should be set to raw.
  • Fixed the context output in the subject for the --subject-cx option so that it's smarter about determining the longest common path. Reported by Max Horn.
  • No longer modifying the values of the to_regex_map hash, so as not to mess with folks who might be passing it as a hash to more than one call to new(). Reported by Darby Felton.
  • Added a meta http-equiv="content-type" tag to HTML output that includes the character set to help some clients in the proper display of the characters in an HTML email. I'm not sure if any clients actually need this help, but it certainly can't hurt!
  • Added the --css-url option to specify an alternate style sheet for HTML emails. SVN::Notify::HTML's own CSS is left in the email, as well, so the specified style sheet can just override the default, rather than have to style everything itself. Yes, it takes advantage of the cascading feature of cascading style sheets! Based on a suggestion by Steve James.
2.54 2006-03-06T00:33:42
  • Added /usr/bin to the list of paths searched for executables. Suggested by Nacho Barrientos.
  • Added --max-diff-length option. Patch from David Burley/SourceForge.
2.53 2006-02-24T21:30:48
  • Added header and footer attributes and command-line options to specify text to be put at the head and foot of each message. For HTML messages, the text will be escaped, unless it starts with <, in which case it will be assumed to be valid HTML and will therefore not be escaped. Either way, it will be output between <div> tags with the IDs header or footer as appropriate. Based on a patch from David Burley/SourceForge.
  • Fixed the executable-searching algorithm added in 2.52 to add .exe to the name of the executable being searched for if $^O eq 'MSWin32'.
  • Fixed encoding issues so that, under Perl 5.8 and later, the IO layer is set on file handles so as to encode input and decode output in the character set specified by the charset attribute. CPAN # 16050, reported by Michael Zehrer.
  • Added a second argument to all calls to encode_entities() in SVN::Notify::HTML and SVN::Notify::HTML::ColorDiff so that only '>', '<', '&', and '"' are escaped.
  • Fixed a bug in the _find_exe() function that was attempting to modify a constant variable. Patch from John Peacock.
  • Turned the _find_exe() function into the find_exe() class method, since subclasses (such as SVN::Notify::Mirror) might want to use it.
2.52 2006-02-19T18:50:24
  • Now uses File::Spec->path to search for a valid sendmail or svnlook when they're not specified via their respective command-line options or environment variables. Suggested by Andreas Koenig. Note that they should probably be explicitly set anyway, as the $PATH environment variable tends to be non-existent when running under Apache.
2.51 2006-01-02T23:28:11
  • Fixed ColorDiff HTML to once again be valid XHTML 1.1.


SVN::Notify 2.50

SVN::Notify 2.50 is currently making its way to CPAN. It has quite a number of changes since I last wrote about it here, most significantly the slick new CSS treatment introduced in 2.47, provided by Bill Lynch. I really like the look, much better than it was before. Have a look at the SVN::Notify::HTML::ColorDiff output to see what I mean. Be sure to make your browser window really narrow to see how all of the sections automatically get a nice horizontal scrollbar when they're wider than the window. Neat, eh? Check out the 2.40 output for contrast.

Here are all of the changes since the last version:

2.50 2005-11-10T23:27:22
  • Added --ticket-url and --ticket-regex options to be used by those who want to match ticket identifiers for systems other than RT, Bugzilla, GNATS, and JIRA. Based on a patch from Andrew O'Brien.
  • Removed bogus use lib line put into Makefile.PL by a prerelease version of Module::Build.
  • Fixed HTML tests to match either ' or &#39;, since HTML::Entities can be configured differently on different systems.
2.49 2005-09-29T17:26:14
  • Now require Getopt::Long 2.34 so that the --to-regex-map option works correctly when it is used only once on the command-line.
2.48 2005-09-06T19:14:35
  • Switched from <span class="add"> and <span class="rem"> to <ins> and <del> elements in SVN::Notify::HTML::ColorDiff in order to make the markup more semantic.
2.47 2005-09-03T18:54:43
  • Fixed options tests to work correctly with older versions of Getopt::Long. Reported by Craig McElroy.
  • Slick new CSS treatment used for the HTML and HTML::ColorDiff emails. Based on a patch from Bill Lynch.
  • Added --svnweb-url option. Based on a patch from Ricardo Signes.
2.46 2005-05-05T05:22:54
  • Added support for Copied files to HTML::ColorDiff so that they display properly.
2.45 2005-05-04T20:38:18
  • Added support for links to the GNATS bug tracking system. Patch from Nathan Walp.
2.44 2005-03-18T06:10:01
  • Fixed Name in POD so that SVN::Notify's POD gets indexed by search.cpan.org. Reported by Ricardo Signes.
2.43 2004-11-24T18:49:40
  • Added --strip-cx-regex option to strip out parts of the context from the subject. Useful for removing parts of the file names you might not be interested in seeing in every commit message.
  • Added --no-first-line option to omit the first sentence or line of the log message from the subject. Useful in combination with the --subject-cx option.
2.42 2004-11-19T18:47:20
  • Changed Files to Paths in hash returned by file_label_map() since directories can be listed as well as files.
  • Fixed SVN::Notify::HTML so that directories listed among the changed paths are not links.
  • Requiring Module::Build 0.26 to make sure that the installation works properly. Reported by Robert Spier.