Just a Theory


Posts about Testing

Why Test Databases?

I’m going to be doing some presentations on testing for database administrators. I’ve been really excited to be working on pgTAP, and have started using it extensively to write tests for a client with a bunch of PostgreSQL databases. But as I start to push the idea of testing within the PostgreSQL community, I’m running into some surprising resistance.

I asked a major figure in the community about this, someone who has expressed skepticism in my past presentations. He feels that it’s hard to create a generic testing framework. He told me:

Well, you are testing for bugs, and bugs are pretty specific in where they appear. Writing the tests is 90% of the job; writing the infrastructure is minor. If the infrastructure has limitations, which all do, you might as well write that extra 10% too.

I have to say that this rather surprised me. I guess I just thought that everyone was on board with the idea of testing. The PostgreSQL core, after all, has a test suite. But the idea that one writes tests to test for bugs seems like a major misconception about testing: I don’t write tests to test for bugs (that is what regression tests are for, but there is much more to testing than regression tests); I write tests to ensure consistent and correct behavior in my code as development continues over time.

It has become clear to me that I need to reframe my database testing presentations to emphasize not the how of testing; I think that pgTAP does a pretty good job of making it a straightforward process (at least as straightforward as writing tests in Ruby or Perl, for example). What I have to address first is the why of testing. I need to convince database administrators that testing is an essential tool in their kits, and the way to do that is to show them why it’s essential.

With this in mind, I asked, via Twitter, why database people should test their databases. I got some great answers (and, frankly, the 140-character limit of Twitter made them admirably pithy, which is a huge help):

  • chromatic_x: @theory, because accidents that happen during tests are much easier to recover from than accidents that happen live.
  • caseywest: @Theory When you write code that’s testable you tend to write better code: cleaner interfaces, reusable components, composable pieces.
  • depesz_com: @Theory testing prevents repeating mistakes.
  • rjbs: @Theory The best ROI for me is “never ship the same bug twice.”
  • elein: @Theory trust but verify
  • cwinters: @Theory so they can change the system without feeling like they’re on a suicide mission
  • caseywest: @Theory So they can document how the system actually works.
  • hanekomu: @Theory Regression tests - to see whether, after having changed something here, something else over there falls over.
  • robrwo: @Theory Show them a case where bad data is inserted into/deleted from database because constraints weren’t set up.

Terrific ideas there. I thank you, Tweeps. But I ask here, too: Why should we write tests against our databases? Leave a comment with your (brief!) thoughts.

And thank you!


pgTAP 0.16 in the Wild

I’ve been writing a lot of tests for a client in pgTAP lately. It’s given me a lot to think about in terms of features I need and best practices in writing tests. I’m pleased to say that, overall, it has been absolutely invaluable. I’m doing a lot of database refactoring, and having the safety of solid test coverage has been an absolute godsend. pgTAP has done a lot to free me from worry about the effects of my changes, as it ensures that everything about the databases continues to just work.

Of course, that’s not to say that I don’t screw up. There are times when my refactorings have introduced new bugs or incompatibilities; after all, the tests I write of existing functionality extend only so far as I can understand that functionality. But as such issues come up, I just add regression tests, fix the issues, and move on, confident in the knowledge that, as long as the tests continue to be run regularly, those bugs will never come up again. Ever.

As a result, I’ll likely be posting a bit on best practices I’ve found while writing pgTAP tests. As I’ve been writing them, I’ve started to find the cow paths that help me to keep things sane. Most helpful is the large number of assertion functions that pgTAP offers, of course, but there are a number of techniques I’ve been developing as I’ve worked. Some are better than others, and still others suggest that I need to find other ways to do things (you know, when I’m cut-and-pasting a lot, there must be another way, though I’ve always done a lot of cut-and-pasting in tests).

In the meantime, I’m happy to announce the release of pgTAP 0.16. This version includes a number of improvements to the installer, including detection of Perl and TAP::Harness, which are required to use the included pg_prove test harness app. The installer also has an important bug fix that greatly increases the chances that the os_name() function will actually know the name of your operating system. And of course, there are new test functions (there’s a quick sketch after the list):

  • has_schema() and hasnt_schema(), which test for the presence or absence of a schema
  • has_type() and hasnt_type(), which test for the presence and absence of a data type, domain, or enum
  • has_domain() and hasnt_domain(), which test for the presence and absence of a data domain
  • has_enum() and hasnt_enum(), which test for the presence and absence of an enum
  • enum_has_labels(), which tests that an enum has an expected list of labels
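
Here’s a quick sketch of a few of the new assertions in use, assuming a hypothetical database with an “app” schema and a “bug_status” enum (adjust the names and the plan to suit your own schema):

SELECT plan(5);

SELECT has_schema( 'app' );
SELECT hasnt_schema( 'aliens' );
SELECT has_type( 'bug_status' );
SELECT has_enum( 'bug_status' );
SELECT enum_has_labels( 'bug_status', ARRAY['new', 'open', 'closed']::name[] );

SELECT * FROM finish();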

As usual, you can download the latest release from pgFoundry. Visit the pgTAP site for more information and for documentation.


pgTAP 0.14 Released

I’ve just released pgTAP 0.14. This release focuses on getting more schema functions into your hands, as well as fixing a few issues. Changes:

  • Added SET search_path statements to uninstall_pgtap.sql.in so that it will work properly when TAP is installed in its own schema. Thanks to Ben for the catch!
  • Added commands to drop pg_version() and pg_version_num() to uninstall_pgtap.sql.in.
  • Added has_index(), index_is_unique(), index_is_primary(), is_clustered(), and index_is_type(). (There’s a brief sketch of some of these after the list.)
  • Added os_name(). This is somewhat experimental. If you have uname, it’s probably correct, but assistance in improving OS detection in the Makefile would be greatly appreciated. Notably, it does not detect Windows.
  • Made ok() smarter when the test result is passed as NULL. It was dying, but now it simply fails and attaches a diagnostic message reporting that the test result was NULL. Reported by Jason Gordon.
  • Fixed an issue in check_test() where an extra character was removed from the beginning of the diagnostic output before testing it.
  • Fixed a bug comparing name[]s on PostgreSQL 8.2, previously hacked around.
  • Added has_trigger() and trigger_is().
  • Switched to pure SQL implementations of the pg_version() and pg_version_num() functions, to simplify including pgTAP in module distributions.
  • Added a note to README.pgtap about the need to avoid pg_typeof() and cmp_ok() in tests run as part of a distribution.
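
A brief sketch of some of the new assertions, with hypothetical table, index, and trigger names:

SELECT plan(4);

SELECT has_index( 'users', 'idx_users_email' );
SELECT index_is_unique( 'users', 'idx_users_email' );
SELECT index_is_type( 'users', 'idx_users_email', 'btree' );
SELECT has_trigger( 'users', 'set_updated_at' );

SELECT * FROM finish();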

Enjoy!


pgTAP 0.12 Released

In anticipation of my PostgreSQL Conference West 2008 talk on Sunday, I’ve just released pgTAP 0.12. This is a minor release with just a few tweaks:

  • Updated plan() to disable warnings while it creates its tables. This means that plan() no longer sends NOTICE messages when it runs, although tests still might, depending on the setting of client_min_messages.
  • Added hasnt_table(), hasnt_view(), and hasnt_column() (see the sketch after this list).
  • Added hasnt_pk(), hasnt_fk(), col_isnt_pk(), and col_isnt_fk().
  • Added missing DROP statements to uninstall_pgtap.sql.in.
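
A quick sketch of the new negative assertions, with hypothetical table and column names:

SELECT plan(3);

SELECT hasnt_table( 'users_backup' );
SELECT hasnt_column( 'users', 'plaintext_password' );
SELECT col_isnt_pk( 'users', 'email' );

SELECT * FROM finish();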

I also have an idea to add functions that return the server version number (and each of the version number parts) and an OS string, to make testing things on various versions of PostgreSQL and on various operating systems a lot simpler.

I think I’ll also spend some time in the next few weeks on an article explaining exactly what pgTAP is and why you’d want to use it. Provided, of course, I can find the tuits for that.


pgTAP 0.11 Released

So I’ve just released pgTAP 0.11. I know I said I wasn’t going to work on it for a while, but I changed my mind. Here’s what’s changed:

  • Simplified the tests so that they now load test_setup.sql instead of setting a bunch of stuff themselves. Now only test_setup.sql needs to be created from test_setup.sql.in, and the other .sql files depend on it, meaning that one no longer has to specify TAPSCHEMA for any make target other than the default.
  • Eliminated all uses of E'' in the tests, so that we don’t have to process them for testing on 8.0.
  • Fixed the spelling of ON_ROLLBACK in the test setup. Can’t believe I had it with one L in all of the test files before! Thanks to Curtis “Ovid” Poe for the spot.
  • Added a couple of variants of todo() and skip(), since I can never remember whether the numeric argument comes first or second. Thanks to PostgreSQL’s functional polymorphism, I don’t have to. Also, there are variants where the numeric value, if not passed, defaults to 1.
  • Updated the link to the pgTAP home page in pgtap.sql.in.
  • TODO tests can now nest.
  • Added todo_start(), todo_end(), and in_todo().
  • Added variants of throws_ok() that test error messages as well as error codes (example after the list).
  • Converted some more tests to use check_test().
  • Added can() and can_ok().
  • Fixed a bug in check_test() where the leading whitespace for diagnostic messages could be off by 1 or more characters.
  • Fixed the installcheck target so that it properly installs PL/pgSQL into the target database before the tests run.
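
A sketch of a few of the additions, with a hypothetical function under test:

SELECT plan(2);

-- The numeric argument to todo() now defaults to 1 when omitted.
SELECT todo( 'frobnicate() is not implemented yet' );
SELECT fail( 'frobnicate() should frobnicate' );

-- throws_ok() can now check the error message as well as the code.
SELECT throws_ok(
    'SELECT 1/0',
    '22012',
    'division by zero',
    'Should get a division-by-zero error'
);

SELECT * FROM finish();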

Now I really am going to do some other stuff for a bit, although I do want to see what I can poach from Epic Test. And I do have that talk on pgTAP next month. So I’ll be back with more soon enough.


Introducing pgTAP

So I started working on a new PostgreSQL data type this week. More on that soon; in the meantime, I wanted to create a test suite for it, and wasn’t sure where to go. The only PostgreSQL tests I’ve seen are those Elein Mustain distributed with the email data type she created in a PostgreSQL General Bits posting from a couple of years ago. I used the same approach myself for my GTIN data type, but it was rather hard to use: I had to pay very close attention to the output in order to tell the description output from the test output. It was quite a PITA, actually.

This time, I started down the same path, then started thinking about Perl testing, where each unit test, or “assertion” in the xUnit parlance, triggers output of a single line of information indicating whether or not a test succeeded. It occurred to me that I could just run a bunch of queries that returned booleans to do my testing. So my first stab looked something like this:

\set ON_ERROR_STOP 1
\set AUTOCOMMIT off
\pset format unaligned
\pset tuples_only
\pset pager
\pset null '[NULL]'

SELECT foo() = 'bar';
SELECT foo(1) = 'baz';
SELECT foo(2) = 'foo';

The output looked like this:

% psql try -f ~/Desktop/try.sql
t
t
t

Once I started down that path, and had written ten or so tests, it suddenly dawned on me that the Perl Test::More module and its core ok() subroutine worked just like that. It essentially just depends on a boolean value and outputs text based on that value. A couple minutes of hacking and I had this:

CREATE TEMP SEQUENCE __tc__;
CREATE OR REPLACE FUNCTION ok ( boolean, text ) RETURNS TEXT AS $$
    SELECT (CASE $1 WHEN TRUE THEN '' ELSE 'not ' END) || 'ok'
        || ' ' || NEXTVAL('__tc__')
        || CASE $2 WHEN '' THEN '' ELSE COALESCE( ' - ' || $2, '' ) END;
$$ LANGUAGE SQL;

I then rewrote my test queries like so:

\echo 1..3
SELECT ok( foo() = 'bar',  'foo() should return "bar"' );
SELECT ok( foo(1) = 'baz', 'foo(1) should return "baz"' );
SELECT ok( foo(2) = 'foo', 'foo(2) should return "foo"' );

Running these tests, I now got:

% psql try -f ~/Desktop/try.sql
1..3
ok 1 - foo() should return "bar"
ok 2 - foo(1) should return "baz"
ok 3 - foo(2) should return "foo"

And, BAM! I had the beginning of a test framework that emits pure TAP output.

Well, I was so excited about this that I put aside my data type for a few hours and banged out the rest of the framework. Why was this exciting to me? Because now I can use a standard test harness to run the tests, even mix them in with other TAP tests on any project I might work on. I quickly hacked together a script to run them:

use strict;
use warnings;
use TAP::Harness;

# Run each .sql file named on the command line through psql against
# the "try" database; TAP::Harness appends each filename to exec.
my $opts = { timer => 0 };    # set to 1 to time each script

my $harness = TAP::Harness->new({
    timer   => $opts->{timer},
    exec    => [qw( psql try -f )],
});

$harness->runtests( @ARGV );

Now I’m able to run the tests like so:

% try ~/Desktop/try.sql        
/Users/david/Desktop/try........ok   
All tests successful.
Files=1, Tests=3,  0 wallclock secs ( 0.00 usr  0.00 sys +  0.01 cusr  0.00 csys =  0.01 CPU)
Result: PASS

Pretty damn cool! And lest you wonder whether such a suite of TAP-emitting test functions is suitable for testing SQL, here are a few examples of tests I’ve written:

-- Plan the tests.
SELECT plan(4);

-- Emit a diagnostic message for users of different locales.
SELECT diag(
    E'These tests expect LC_COLLATE to be en_US.UTF-8,\n'
    || 'but yours is set to ' || setting || E'.\n'
    || 'As a result, some tests may fail. YMMV.'
)
    FROM pg_settings
    WHERE name = 'lc_collate'
    AND setting <> 'en_US.UTF-8';

SELECT is( 'a', 'a', '"a" should = "a"' );
SELECT is( 'B', 'B', '"B" should = "B"' );

CREATE TEMP TABLE try (
    name lctext PRIMARY KEY
);

INSERT INTO try (name)
VALUES ('a'), ('ab'), ('â'), ('aba'), ('b'), ('ba'), ('bab'), ('AZ');

SELECT ok( 'a' = name, 'We should be able to select the value' )
    FROM try
    WHERE name = 'a';

SELECT throws_ok(
    'INSERT INTO try (name) VALUES (''a'')',
    '23505',
    'We should get an error inserting a lowercase letter'
);

-- Finish the tests and clean up.
SELECT * FROM finish();

As you can see, it’s just SQL. And yes, I have ported most of the test functions from Test::More, as well as a couple from Test::Exception.

So, without further ado, I’d like to introduce pgTAP, a lightweight test framework for PostgreSQL implemented in SQL and PL/pgSQL. I’ll be hacking on it more in the coming days, mostly to get a proper client for running tests hacked together. Then I think I’ll see if pgFoundry is interested in it.

Whaddya think? Is this something you could use? I can see many uses, myself, not only for testing a custom data type as I develop it, but also custom functions in PL/pgSQL or PL/Perl, and, heck, just regular schema stuff. I’ve had to write a lot of Perl tests to test my database schema (triggers, rules, functions, etc.), all using the DBI and being very verbose. Being able to do it all in a single psql script seems so much cleaner. And if I can end up mixing the output of those scripts in with the rest of my unit tests, so much the better!

Anyway, feedback welcome. Leave your comments, suggestions, complaints, patches, etc., below. Thanks!


What’s With These CPAN-Testers Failures?

So I just learned about and subscribed to the CPAN-Testers feed for my modules. There appear to be a number of odd failures. Take this one. It says, “Can’t locate Algorithm/Diff.pm,” despite the fact that I have properly specified the requirement for Text::Diff, which itself properly requires Algorithm::Diff. Is this an instance of CPAN.pm or CPANPLUS not following all prerequisites, or what?
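
For reference, “properly specified” means the usual Makefile.PL declaration, something like this sketch (the module name here is hypothetical, not the actual distribution):

use ExtUtils::MakeMaker;

WriteMakefile(
    NAME      => 'My::Module',          # hypothetical
    PREREQ_PM => { 'Text::Diff' => 0 }, # Text::Diff itself requires Algorithm::Diff
);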

Or take this failure. It says, “[CP_ERROR] [Mon Sep 5 09:32:08 2005] No such module ‘mod_perl’ found on CPAN”. Yet here it is. Maybe the CPANPLUS indexer has a bug? Or are people’s configurations just horked? Or am I just doing something braindead?

Opinions welcomed.


10001

% make devtest
PERL_DL_NONLAZY=1 /usr/bin/perl inst/runtests.pl -d
t/Bric/Test/Runner....ok                                                     
        1801/10001 skipped: various reasons
All tests successful, 1801 subtests skipped.
Files=1, Tests=10001, 247 wallclock secs (110.30 cusr +  8.90 csys = 119.20 CPU)

How ‘bout them apples?


How Does DateTime Ignore CORE::GLOBAL::time?

For the life of me, I can’t figure out why this test fails. time returns my overridden time, and DateTime just calls scalar time, so I would expect it to work. But DateTime appears to be somehow getting the time from CORE::time instead.

#!/usr/bin/perl -w

use strict;
use DateTime;
use Test::More tests => 1;

BEGIN {
    *CORE::GLOBAL::time = sub () { CORE::time() };
}

my $epoch = time;
sleep 1;
try();

sub try {
    no warnings qw(redefine);
    local *CORE::GLOBAL::time = sub () { $epoch };
    is( DateTime->now->epoch, time );
}

Anyone got any bright ideas? This is a reasonably well-known technique, so I’m sure that I must be overlooking something obvious.
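
One possible explanation worth checking: per perlsub, a CORE::GLOBAL override affects only code compiled after the override is installed, and the use DateTime line above runs at compile time before the BEGIN block below it does. If that’s the culprit, installing the override before loading DateTime should do the trick. A sketch, assuming (as noted above) that DateTime->now really does call scalar time:

#!/usr/bin/perl -w

use strict;
use Test::More tests => 1;

# Install the override before DateTime.pm is compiled, so that its
# time() calls bind to CORE::GLOBAL::time, not the core builtin.
BEGIN {
    *CORE::GLOBAL::time = sub () { CORE::time() };
}

use DateTime;

my $epoch = time;
sleep 1;

{
    no warnings qw(redefine);
    local *CORE::GLOBAL::time = sub () { $epoch };
    is( DateTime->now->epoch, $epoch, 'DateTime should see the mocked time' );
}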


Test.Simple 0.20 Released

It gives me great pleasure, not to mention a significant amount of pride, to announce the release of Test.Simple 0.20. There are quite a few changes in this release, including a few that break backwards compatibility—but only if you’re writing your own Test.Builder-based test libraries (and I don’t think anyone has done so yet) or if you’re subclassing Test.Harness (and there’s only one of those that I know of).

The biggest change is that Test.Harness.Browser now supports pure .js script files in addition to the original .html files. This works best in Firefox, of course, but with a lot of help from Pawel Chmielowski (“prefiks” on #jsan), it also works in Safari, Opera, and IE 6 (though not in XP Service Pack 2; I’ll work on that after I get my new PC in the next few days). The trick with Firefox (and hopefully other browsers in the future, since it feels less hackish to me) is that it uses the DOM to create a new HTML document in a hidden iframe, and that document loads the .js. Essentially, it just uses the DOM to mimic the structure of a typical .html test file. For the other browsers, the hidden iframe uses XMLHttpRequest to load and eval the .js test file. Check it out (verbosely)!
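
In outline, the two strategies look something like this (a purely illustrative sketch; the actual Test.Harness.Browser code differs):

// Load a .js test file into a hidden iframe, either by mimicking a
// typical .html test file via the DOM, or by fetching and evaling it.
function loadTestScript (iframe, url) {
    var doc = iframe.contentDocument || iframe.contentWindow.document;
    if (navigator.userAgent.indexOf('Firefox') >= 0) {
        // Build the equivalent of an .html test file with the DOM.
        var script = doc.createElement('script');
        script.type = 'text/javascript';
        script.src  = url;
        doc.getElementsByTagName('head')[0].appendChild(script);
    } else {
        // Fetch the file synchronously and eval it in the iframe.
        var req = window.XMLHttpRequest
            ? new XMLHttpRequest()
            : new ActiveXObject('Microsoft.XMLHTTP');
        req.open('GET', url, false);
        req.send(null);
        iframe.contentWindow.eval(req.responseText);
    }
}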

I think that this will greatly enhance the benefits of Test.Simple, as it makes writing tests really simple. All you have to do is create a single .html file that looks something like this:

<html>
<head>
    <script type="text/javascript" src="./lib/JSAN.js"></script>
</head>
<body>
<script type="text/javascript">
    new JSAN("../lib").use("Test.Harness.Browser");
    new Test.Harness.Browser('./lib/JSAN.js').encoding('utf-8').runTests(
        'foo.js',
        'bar.js'
    );
</script>
</body>
</html>

In fact, that’s pretty much exactly what Test.Simple’s new harness looks like, now that I’ve moved all of the old tests into .js files (although there is still a simpl.html test file to ensure that .html test files still work!). Here I’m using JSAN to dynamically load the libraries I need. I use it to load Test.Harness.Browser (which then uses it to load Test.Harness), and then I tell the Test.Harness.Browser object where it is so that it can load it for each .js script. The test script itself can then look something like this:

new JSAN('../lib').use('Test.Simple');
plan({tests: 3});
ok(1, 'compile');
ok(1);
ok(1, 'foo');

And that’s it! Just use JSAN to load the appropriate test library or libraries and go! I know that JSAN is already loaded because Test.Harness.Browser loads it for me before it loads and runs my .js test script. Nice, eh?

Of course, you don’t have to use JSAN to run pure .js tests, although it can be convenient. Instead, you can just pass a list of files to the harness to have it load them for each test:

<html>
<head>
    <script type="text/javascript" src="./lib/Test/Harness.js"></script>
    <script type="text/javascript" src="./lib/Test/Harness/Browser.js"></script>
</head>
<body>
<script type="text/javascript">
    new Test.Harness.Browser(
        'lib/Test/Builder.js',
        'lib/Test/More.js',
        '../lib/MY/Library.js'
    ).runTests(
        'foo.js',
        'bar.js'
    );
</script>
</body>
</html>

This example tells Test.Harness.Browser to load Test.Builder and Test.More, and then to run the tests in foo.js and bar.js. No need for JSAN if you don’t want it. The test script is exactly the same as the above, only without the line with JSAN loading your test library.

Now, as I’ve said, this is imperfect. It’s surprisingly difficult to get browsers to do this properly, and it’s likely that it won’t work at all in many browsers. I’m sure that I broke the Director harness, too. Nevertheless, I’m pleased that I got as many browsers to work as I did (again, with great thanks to Pawel Chmielowski for all the great hacks), but at this point, I’ll probably only focus on adding support for Windows XP Service Pack 2. But as you might imagine, I’d welcome patches from anyone who wants to add support for other browsers.

There are a lot of other changes in this release. Here’s the complete list:

  • Fixed verbose test output to be complete in the harness in Safari and IE.
  • Fixed plan() so that it doesn’t die if the object is passed with an unknown attribute. This can happen when JS code has altered Object.prototype (shame on it!). Reported by Rob Kinyon.
  • Fixed some errors in the POD documentation.
  • Updated JSAN to 0.10.
  • Added documentation for Test.Harness.Director, compliments of Gordon McCreight.
  • Fixed line endings in Konqueror and Opera and any other browser other than MSIE that supports document.all. Reported by Rob Kinyon.
  • Added support to Test.Harness.Browser for .js test files in addition to .html test files. Thanks to Pawel Chmielowski for helping me to overcome the final obstacles to actually getting this feature to work.
  • Added missing variable declarations. Patch from Pawel Chmielowski.
  • More portable fetching of the body element in Test.Builder. Based on patch from Pawel Chmielowski.
  • Added an encoding attribute to Test.Harness. This is largely to support pure JS tests, so that the browser harness can set up the proper encoding for the script elements it creates.
  • Added support for Opera, with thanks to Pawel Chmielowski.
  • Fixed the output from skipAll in the test harness.
  • Fixed display of summary of failed tests after all tests have been run by the browser harness. They are now displayed in a nicely formatted table without a NaN stuck where it doesn’t belong.
  • COMPATIBILITY CHANGE: The browser harness now outputs failure information bold-faced and red. This required changing the output argument to the outputResults() method to an object with two methods, pass() and fail(). Anyone using Test.Harness.outputResults() will want to make any changes accordingly.
  • COMPATIBILITY CHANGE: new Test.Builder() now always returns a new Test.Builder object instead of a singleton. If you want the singleton, call Test.Builder.instance(). Test.Builder.create() has been deprecated and will be removed in a future release. This is different from how Perl’s Test::Builder works, but is more JavaScript-like and sensible, so we felt it was best to break things early on rather than later. Suggested by Bob Ippolito.
  • Added beginAsync() and endAsync() functions to Test.More. Suggested by Bob Ippolito.

As always, feedback/comments/suggestions/whinges welcome. Enjoy!


Test.Simple 0.11 Released

I’m pleased to announce the release of Test.Simple 0.11. This release fixes a number of bugs in the framework’s IE and Safari support, adds JSAN support, and includes an experimental harness for Macromedia (now Adobe) Director. You can download it from JSAN, and all future releases will be available on JSAN. See the harness in action here (or verbosely!). This release has the following changes:

  • The browser harness now works more reliably in IE and Safari.
  • Fixed syntax errors in tests/harness.html that IE and Safari care about.
  • Various tweaks for Director compatibility from Gordon McCreight.
  • Removed debugging output from Test.More.canOK().
  • Fixed default output so that it doesn’t re-open a closed browser document when there is a “test” element.
  • Added experimental Test.Harness.Director, compliments of Gordon McCreight. This harness is subject to change.
  • Added Test.PLATFORM, containing a string defining the platform. At the moment, the only platforms listed are “browser” or “director”.
  • Added support for Casey West’s JSAN. All releases of Test.Simple will be available on JSAN from now on.
  • The iframe in the browser harness is no longer visible in IE. Thanks to Marshall Roch for the patch.
  • Noted addition of Test.Harness and Test.Harness.Browser in the README.

I’ve been getting more and more excited about Casey West’s work on JSAN. It gets better every day, and I hope that it attracts a lot of hackers who want to distribute open source JavaScript modules. You should check it out! I’ve been working on a Perl module to simplify the creation of JSAN distributions. Look for it on CPAN soonish.


Test.Simple 0.10 Released

I’m pleased to announce the first beta release of Test.Simple, the port of Test::Builder, Test::Simple, Test::More, and Test::Harness to JavaScript. You can download it here. See the harness in action here (or verbosely!). This release has the following changes:

  • Changed the signature of functions passed to output() and friends to accept a single argument rather than a list of arguments. This allows custom functions to be much simpler.
  • Added support for Macromedia Director. Patch from Gordon McCreight.
  • Backwards Incompatibility change: moved all “modules” into Test “namespace” by using an object for the Test namespace and assigning the Build() constructor to it. See http://xrl.us/fy4h for a description of this approach.
  • Fixed the typeOf() class method in Test.Builder to just return the value returned by the typeof operator if the class constructor is an anonymous function.
  • Changed for (var i in someArray) to for (var i = 0; i < someArray.length; i++) for iterating through arrays, since the former method will break if someone has changed the prototype for arrays. Thanks to Bob Ippolito for the spot!
  • The default output in browsers is now to append to an element with the ID “test” or, failing that, to use document.write. The use of the “test” element allows output to continue to be written to the browser window even after the document has been closed. Reported by Adam Kennedy.
  • Changed the default endOutput() method to be the same as the other outputs.
  • Backwards incompatibility change: Changed semantics of plan() so that it takes an object for an argument. This allows multiple commands to be passed, where the object attribute keys are the commands and their values are the arguments. (There’s a sketch of the new usage after the list.)
  • Backwards incompatibility change: Changed the “no_plan”, “skip_all”, and “no_diag” (in Test.More only) options to plan() to their studlyCap alternatives, “noPlan”, “skipAll”, and “noDiag”. This makes them consistent with JavaScript attribute naming convention.
  • Added beginAsync() and endAsync() methods to Test.Builder to allow users to put off the ending of a script until after asynchronous tests have been run. Suggested by Adam Kennedy.
  • Backwards incompatibility change: Changed the signature for the output() method and friends to take only a single anonymous function as its argument. If you still need to call a method, pass an anonymous function that calls it appropriately.
  • Changed handling of line-endings to be browser-specific. That is, if the current environment is Internet Explorer, we use “\r” for line endings. Otherwise we use “\n”. Although IE properly interprets \n as a line ending when it’s passed to document.write(), it doesn’t when passed to a DOM text node. No idea why not.
  • Added a browser harness. Now you can run all of your tests in a single browser window and get a summary at the end, including a list of failed tests and the time spent running the tests.
  • Fixed calls to warn() in Test.More.
  • Output to the browser now causes the window to scroll when the length of the output is greater than the height of the window.
  • Backwards incompatibility change: Changed all instances of “Ok” to “OK”. So this means that the new Test.More function names are canOK(), isaOK(), and cmpOK(). Sorry ‘bout that, won’t happen again.
  • Ported to Safari (though there are issues; see the “Bugs” section of the Test.Harness.Browser docs for details).
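
A minimal sketch of the new plan() semantics, assuming Test.More has already been loaded into the page (the descriptions are hypothetical):

plan({ tests: 2 });
ok(1, 'plan() now takes an object of commands');
cmpOK(2 + 2, '==', 4, 'cmpOk() is now spelled cmpOK()');

// The studlyCaps options work the same way, e.g.:
// plan({ skipAll: 'This browser is not supported' });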

Obviously this is a big release. I bumped up the version number because there are a fair number of backwards incompatibilities. But I’m reasonably confident that they won’t change so much in the future. And with the addition of the harness, it’s getting ready for prime time!

Next up, I’ll finish porting the tests from Test::Harness (really!) and add support for JSAN (look for a JSAN announcement soon). But in the meantime, feedback, bug reports, kudos, complaints, etc. warmly welcomed!


The Purpose of TestSimple

In response to my TestSimple 0.03 announcement, Bill N1VUX asked a number of important questions about TestSimple’s purpose. Since this is just an alpha release and I’m still making my way through the port, I haven’t created a project page or started to promote it much, yet. Once I get the harness part written and feel like it’s stable and working well, I’ll likely start to promote it as widely as possible.

But yes, TestSimple is designed for doing unit testing in JavaScript. Of all the JavaScript I’ve seen, I’ve never seen any decent unit tests. People sometimes write a few integration tests to try things out in browsers, but don’t write many tests that would ensure that their JS code runs where they need it to run and that would give them the freedom to refactor. There is generally very little coverage in JavaScript tests—if there are any tests at all.

While it’s true that JavaScript is nearly always an embedded language (but see also Rhino and SpiderMonkey), that doesn’t mean that one doesn’t write a lot of JavaScript functions or classes that need testing. It’s also important to have a lot of tests you can run in various browsers (just as you can run tests of a Perl module on various OSs and various versions of Perl). I started the port because, as I was learning JavaScript, I realized that I didn’t want to write much without writing tests. The purpose is to ensure the quality of JavaScript code as it goes through the development process. And the freedom to refactor that tests offer is very important for my personal development style.

So, to answer N1VUX’s questions:

Is the point to integration test the whole distributed front-ends of applications from the (EcmaScript compliant) browser?

Yes. And I expect that, as people write more JavaScript applications, there will be a lot more code that needs testing. However, unlike other JavaScript testing frameworks I’ve seen (all based on the xUnit framework), my suite doesn’t assume that tests will be run in a browser. Ultimately, I’d like to be able to automate tests outside of browsers—or by scripting browsers. But in the meantime, it will produce TAP-compliant output in the browser, and I plan on implementing a harness that will run all of your test scripts in a single browser window and output the results, just like Test::Harness does for Perl modules on the command-line.

Or to unit test the client-side java-script as an entity, mocking the server??

Yes, I would like to be able to do that eventually. I will likely mock the server by mocking XMLHttpRequest and Microsoft.XMLHTTP to return XML strings that can be used for testing. Stuff like that.
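
Something along these lines, perhaps (a speculative sketch; the stub and its canned response are illustrative only):

// Replace the XMLHttpRequest constructor with a stub that returns
// canned XML, so that tests never touch a real server.
window.XMLHttpRequest = function () {
    this.readyState   = 4;
    this.status       = 200;
    this.responseText = '<result>ok</result>';   // canned response
};
window.XMLHttpRequest.prototype.open = function () {};
window.XMLHttpRequest.prototype.send = function () {
    if (this.onreadystatechange) this.onreadystatechange();
};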

Or is the point to Unit test JavaScript functions in the browser in vitro, mocking everything outside the current function?

If need be, yes. The point is that you have the freedom to do it the way that makes sense to your particular project. The testing framework itself doesn’t care where it’s run or how.

Or is this for Unit Testing of the Presentation Layer on the server from the Browser? (If so, how can a JavaScript arrange to Mock the Model layer?)

Probably not, but again, it depends on the model of your application. I’ve tried to make no assumptions, just provide you with the tools to easily test your application. I hope and expect that others will start creating the appropriate JavaScript libraries to start mocking whatever other APIs one needs to fully unit test JavaScript code that relies on such libraries.

Or is it more likely for driving Integration Testing from the browser with the scripting simplicity we’ve come to love, without resorting to OLE-stuffing the browser from Perl?

That was my initial impetus, yes.

Digging into the TAR file (which I normally wouldn’t do before peaking at the web copy of the POD2HTML’s) I think I understand it’s for unit-testing JavaScript classes, which I hadn’t even considered. (JavaScript has classes that fancy? *shudder* no wonder pages don’t work between browser versions.) I hope I don’t need to do that.

Yes, as more people follow Google’s lead, more and more applications will be appearing in JavaScript, and they will require a lot of code. JavaScript already supports a prototype-based object-orientation scheme, and if JavaScript 2.0 ever becomes a reality (please, please, please, please please!), then we’ll have real classes, namespaces, and even import, include and use! Testing will become increasingly important as more organizations come to rely on growing amounts of production JavaScript code.


TestSimple 0.03 Released

I’m pleased to announce the third alpha release of TestSimple, the port of Test::Builder, Test::Simple, and Test::More to JavaScript. You can download it here. This release has the following changes:

  • Removed trailing commas from 3 arrays, since IE6/Win doesn’t like them. And now everything works in IE. Thanks to Marshall Roch for tracking down and nailing this problem.
  • isNum() and isntNum() in TestBuilder.js now properly convert values to numbers using the global Number() function.
  • CurrentTest is now properly initialized to 0 when creating a new TestBuilder object.
  • Values passed to like() and unlike() that are not strings now always fail to match the regular expression.
  • plan() now outputs better error messages.
  • isDeeply() now works better with circular and repeating references.
  • diag() is now smarter about converting objects to strings before outputting them.
  • Changed isEq() and isntEq() to use simple equivalence checks (== and !=, respectively) instead of stringified comparisons, as the equivalence checks are more generally useful. Use cmpOk(got, "eq", expect) to explicitly compare stringified versions of values.
  • TestBuilder.create() now properly returns a new TestBuilder object instead of the singleton.
  • The useNumbers(), noHeader(), and noEnding() accessors will now properly assign a non-null value passed to them.
  • The arrays returned from summary() and details() now have the appropriate structures.
  • diag() now always properly adds a “#” character after newlines.
  • Added output(), failureOutput(), todoOutput(), warnOutput(), and endOutput() to TestBuilder to set up function references to which to send output for various purposes. The first three each default to document.write, while warnOutput() defaults to window.alert and endOutput() defaults to the appendData function of a text element inside an element with the ID “test” or, failing that, document.write.
  • todo() and todoSkip() now properly add “#” after all newlines in their messages.
  • Fixed line ending escapes in diagnostics to be platform-independent. Bug reported by Marshall Roch.
  • Ported about a third of the tests from Test::Simple (which is how I caught most of the above issues). The remaining tests from Test::Simple will be ported for the next release.

Many thanks to Marshall Roch for help debugging issues in IE.

Now, there is one outstanding issue I’d like to address before I would consider this production ready (aside from porting all the remaining tests from Test::Simple): how to harness the output. Harnessing breaks down into a number of issues:

How to run all tests in a single window. I might be able to write a build script that builds a single HTML file that includes all the other HTML files in iframes or some such. But then will each run in its own space without stomping on the others? And how would the harness pull in the results of each? It might be able to go into each of its children and grab the results from the TestBuilder objects…

More feedback/advice/insults welcome!


TestSimple 0.02 Released

I’m pleased to announce the second alpha release of TestSimple, the port of Test::Builder, Test::Simple, and Test::More to JavaScript. You can download it here. This release has the following changes:

  • Removed eqArray() and eqAssoc() functions from TestMore per suggestion from Michael Schwern. The problem is that these are not test functions, and so are inconsistent with the way the rest of the functions work. isDeeply() is the function that users really want.
  • Changed eqSet() to isSet() and made it into a real test function.
  • Implemented skip(), todoSkip(), and todo(). These are a bit different from the Perl originals, so read the docs!
  • Implemented skipAll() and BAILOUT() using exceptions and an exception handler installed in window.onerror.
  • The final message of a test file now outputs in the proper place. Tests must be run inside an element with its “id” attribute set to “test”, such as <pre id="test">. The window.onload handler will find it and append the final test information.
  • Implemented skipRest() in TestBuilder and TestMore. This method is stubbed out in the Perl original, but not yet implemented there!

The only truly outstanding issues I see before I would consider these “modules” ready for production use are:

  • Figure out how to get at file names and line numbers for better diagnostic messages. Is this even possible in JavaScript?
  • Decide where to send test output, and where to allow other output to be sent. Test::Builder clones STDERR and STDOUT for this purpose. We’ll probably have to do it by overriding document.write(), but it’d be good to allow users to define alternate outputs (tests may not always run in a browser, eh?). Maybe we can use an output object? Currently, a browser and its DOM are expected to be present. I could really use some advice from real JavaScript gurus on this one.
  • Write tests!

Feedback/advice/insults welcome!


Apache::Util::escape_html() Doesn’t Like Perl UTF-8 Strings

I got bit by a bug with Apache::Util’s escape_html() function in mod_perl 1. It seems that it doesn’t like Perl’s Unicode-encoded strings! This patch demonstrates the issue (be sure that your editor understands utf-8):

--- modperl/t/net/perl/util.pl.~1.18.~  Sun May 25 03:54:08 2003
+++ modperl/t/net/perl/util.pl  Thu Sep  9 19:38:40 2004
@@ -74,6 +74,25 @@
 
 #print $esc_2;
 test ++$i, $esc eq $esc_2;
+
+# Make sure that escape_html() understands multibyte characters.
+my $utf8 = '<專輯>';
+my $esc_utf8 = '&lt;專輯&gt;';
+my $test_esc_utf8 = Apache::Util::escape_html($utf8);
+test ++$i, $test_esc_utf8 eq $esc_utf8;
+#print STDERR "Compare '$test_esc_utf8'\n     to '$esc_utf8'\n";
+
+eval { require Encode };
+unless ($@) {
+    # Make sure escape_html() properly handles strings with Perl's
+    # Unicode encoding.
+    $utf8 = Encode::decode_utf8($utf8);
+    $esc_utf8 = Encode::decode_utf8($esc_utf8);
+    $test_esc_utf8 = Apache::Util::escape_html($utf8);
+    test ++$i, $test_esc_utf8 eq $esc_utf8;
+    #print STDERR "Compare '$test_esc_utf8'\n     to '$esc_utf8'\n";
+}
+
 use Benchmark;
 
 =pod

If I enable the print statements and look at the log, I see this:

Compare '&lt;專輯&gt;'
     to '&lt;專輯&gt;'
Compare '&lt;å°è¼¯&gt;'
     to '&lt;專輯&gt;'

The first escape appears to work correctly, but when I decode the string to Perl’s Unicode representation, you can see how badly escape_html() munges the text!

Curiously, both tests fail, although the first conversion appears to be correct. This could be due to the behavior of eq, though I’m not sure why. But it’s the second test that’s the more interesting, since it really screws things up.
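
In the meantime, a possible workaround (an untested sketch on my part): hand escape_html() raw UTF-8 octets and decode the result back to Perl’s internal representation afterward:

use Encode qw(encode_utf8 decode_utf8);
use Apache::Util ();

my $string = decode_utf8('<專輯>');   # a decoded (internal) Perl string

# Escape the raw octets, then decode the result back.
my $escaped = decode_utf8(
    Apache::Util::escape_html( encode_utf8($string) )
);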
