Requiring Reworked Sqitch Changes

I recently discovered a mildly annoying bug in Sqitch, the Git-inspired database schema change management app I’ve been working on for the past year. One of its key features is the ability to “rework” changes. For example, if you have a change that defines a function change_password(), and discover sometime after release that it has a bug (maybe the hashing algorithm is too weak), you can “rework” it – essentially modify it in place – and save some headaches. Check out the “In Place Changes” section of the PostgreSQL, SQLite, Oracle, or MySQL (coming soon) tutorials for detailed examples of how it works.

The bug was about what happens when one adds a new change that depends on a reworked change, but just specifies it by name, such as change_password:

sqitch add meow --requires change_password

This added the change fine, but at deploy time, Sqitch complained that there were multiple instances of a change in the database. Of course, that’s true, because change_password will have been deployed twice: once for the original version, and a second time for the reworked version. This was inconsistent with how Sqitch looks up changes in the plan, where it simply returns the first instance of the change. So I changed it so that dependency lookups in the database also return the first instance of the change. I believe this makes sense, because if you require change_password without specifying which instance you want, you probably want any instance, starting with the earliest.

But what if you actually need to require a specific instance of a reworked change? Let’s say your plan looks like this:

users
widgets
change_pass
sleep
@v1.0

work_stuff
change_pass [change_pass@v1.0]

The third change is change_pass, and it has been reworked in the sixth change (requiring the previous version, as of the @v1.0 tag). If you want to require any instance of change_pass, you specify it as in the previous example. But what if you need the changes made in the reworked version? You might try to require it as of the symbolic tag @HEAD:

sqitch add meow --requires change_pass@HEAD

This means, “Require the last instance of change_pass in the plan.” And that would work…until you reworked it again, at which point @HEAD would resolve to the still-newer instance. Sqitch will choke on that, because you can’t require changes that appear later in the plan than the change that requires them.

So what we have to do instead is add a new tag after the second instance of change_pass:

sqitch tag rehash

Now the plan will look like this:

users
widgets
change_pass
sleep
@v1.0

work_stuff
change_pass [change_pass@v1.0]
@rehash

Now we can identify exactly the instance we need by specifying that tag:

sqitch add meow --requires change_pass@rehash

Meaning “The instance of change_pass as of @rehash.” If what you really needed was the first version, you can specify the tag that follows it:

sqitch add meow --requires change_pass@v1.0

Which, since it is the first instance, is the same as specifying no tag at all. But if there were, say, four instances of change_pass, you can see how tags become important for pinning dependencies to specific instances.

For what it’s worth, this is how to get around the original bug referenced above: just specify which instance of the change to require by using a tag that follows that instance, and the error should go away.

Notes on Upcoming Sqitch Improvements

I was traveling last week, and knowing I would be offline a fair bit, not to mention seriously jet-lagged, I put my hacking efforts into getting MySQL support into Sqitch. I merged it in yesterday; check out the tutorial if you’re interested in it. I expect to release v0.980 with the MySQL support in a couple of weeks; testing and feedback would be most appreciated.

There is a caveat, though: it requires MySQL v5.6.4 or higher. So if you’re stuck with an older MySQL, it won’t work. There are two reasons to require v5.6.4:

  • The microsecond precision support in DATETIME values, added in v5.6.4. This makes it much easier to keep changes in the proper order, since consecutive deployments frequently complete within a single second.
  • The SIGNAL functionality, introduced in v5.5. This allows the schema to mock a check constraint in the Sqitch database, as well as make it much easier to write verify tests (as described in the tutorial and figured out on Stack Overflow); see the sketch below.
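
Here’s a minimal sketch of the SIGNAL trick — the table and column names are hypothetical stand-ins, not the actual Sqitch registry schema:

DELIMITER //
CREATE TRIGGER ck_change_id BEFORE INSERT ON changes
FOR EACH ROW
BEGIN
    -- Mock a check constraint: reject malformed change IDs.
    IF CHAR_LENGTH(NEW.change_id) <> 40 THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'change_id must be a 40-character SHA1 hash';
    END IF;
END //
DELIMITER ;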

But if you can afford to take advantage of a relatively modern MySQL, give it a shot!

The next release also makes a backwards-incompatible change to the SQLite engine: the default Sqitch database is no longer $db_dir/$db_name-sqitch.$suffix, but $db_dir/sqitch.$suffix. In other words, if you were deploying to a db named /var/db/myapp.db, Sqitch previously kept its metadata in /var/db/myapp-sqitch.db, but now will keep it in /var/db/sqitch.db. This is to make it more like the other engines (MySQL defaults to a database named “sqitch”, and Postgres and Oracle default to a schema named “sqitch”).

It’s also useful if you use the SQLite ATTACH DATABASE command to manage multiple database files in a single project. In that case, you will want to use the same metadata file for all the databases. Keep them all in the same directory with the same suffix and you get just that with the default Sqitch database.

If you’d like it to have a different name, use sqitch config core.sqlite.sqitch_db $name to configure it. This will be useful if you don’t want to use the same Sqitch database to manage multiple databases, or if you do, but they live in different directories.
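
For example, assuming you wanted the metadata in a file named meta.db (a name I just made up):

sqitch config core.sqlite.sqitch_db meta.db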

I haven’t released this change yet, and I am not a big-time SQLite user. So if this makes no sense, please comment on this issue. It’ll be a couple of weeks before I release v0.980, so there is time to reverse it if there’s consensus that it’s a bad idea.

But given another idea I’ve had, I suspect it will be okay. The idea is to expand on the concept of a Sqitch “target” by giving it its own command and configuration settings. Basically, it would be sort of like Git remotes: use URIs to specify database connection and parameter info (such as the sqitch database name for SQLite). These can be passed to database-touching commands, such as deploy, revert, log, and the like. They can also be given names and stored in the configuration file. The upshot is that it would enable invocations such as

sqitch deploy production
sqitch log qa
sqitch status pg://localhost/flipr?sqitch_schema=meta

See the GitHub issue for a fuller description of this feature. I’m certain that this would be useful at work, as we have a limited number of databases that we deploy each Sqitch project to, and it’s more of a PITA for my co-workers to remember to use different values for the --db-host, --db-user, --db-name and friends options. The project itself would just store the named list of relevant deployment targets.
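
To make that concrete, here’s a purely hypothetical sketch of what named targets might look like in the configuration file, by analogy to Git remotes — none of this syntax is final:

[target "production"]
    uri = pg://db.example.com/flipr
[target "qa"]
    uri = pg://qa.example.com/flipr?sqitch_schema=meta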

And it alleviates the issue of specifying a different Sqitch database on SQLite or MySQL, as one can just create a named target that specifies it in the URI.

Not sure when I will get to this feature, though. I think it would be great to have, and maybe iovation would want me to spend some time on it in the next couple of months. But it might also be a great place for someone else to get started adding functionality to Sqitch.

Oh, and before I forget: it looks like Sqitch might soon get CUBRID support, too, thanks to Ștefan Suciu. Stay tuned!

Sqitch Mail List

Just a quick post to announce that I’ve set up a Sqitch Google Group. I’ve been getting a lot more email about it lately, and a fair bit of it should be shared more generally. So if you’re interested in Sqitch, sign up! (Don’t like web forums? Me neither. Feel free to subscribe by email, instead.)

Agile Database Development Tutorial

I gave a tutorial at PGCon a couple weeks back, entitled “Agile Database Development with Git, Sqitch, and pgTAP.” It went well, I think. The Keynote document and an exported PDF have been posted on PGCon.org, uploaded to Speaker Deck, and embedded below, too. Want to follow along? Clone the tutorial Git repository. Here’s the teaser:

Hi, I’m David. I like to write database apps. Just as much as I like to write web apps. (Maybe more!) How? Not by relying on bolted-on, half-baked database integration tools like migrations, I’ll tell you that! Instead, I make extensive use of best-of-breed tools for source control (Git), database unit testing (pgTAP), and database change management and deployment (Sqitch). If you’d like to get as much pleasure out of database development as you do application development, join me for this tutorial. We’ll develop a sample application using the processes and tools I’ve come to depend on, and you’ll find out whether they might work for you. Either way, I promise it will at least be an amusing use of your time.

Sqitch on Oracle

I found myself with a little unexpected time at work recently, and since we use Oracle (for a few more months), I decided to port Sqitch. Last night, I released v0.970 with full support for Oracle. I did the development against an 11.2 VirtualBox VM, though I think it should work on 10g, as well.

Sqitch is available from the usual locations. For Oracle support, you’ll need the Instant Client, including SQL*Plus. Make sure you have $ORACLE_HOME set and you’ll be ready to install. Via CPAN, it’s

cpan install App::Sqitch DBD::Oracle

Via Homebrew:

brew tap theory/sqitch
brew install sqitch-oracle

Via ActiveState PPM, install ActivePerl, then run:

ppm install App-Sqitch DBD-Oracle

There are a few other minor tweaks and fixes in this release; check the release notes for details.

Want more? I will be giving a half-day tutorial, entitled “Agile Database Development,” on database development with Git, Sqitch, and pgTAP on May 22 at PGCon 2013 in Ottawa, Ontario. Come on up!

Sqitch: Now with SQLite Support

This week I released Sqitch v0.961. There are a number of great new features since v0.95x, including the beginnings of two features I’ve had in mind since the beginning: VCS integration and support for multiple databases.

First the VCS integration. This comes in the form of the new checkout command, which automatically makes database changes for you when you change VCS branches. Say you have two branches, “widgets” and “big-fix”, and that their Sqitch plans diverge. If you’re in the “widgets” branch and want to switch to “big-fix”, just run

sqitch checkout big-fix

Sqitch will look at the “big-fix” plan, figure out the last change in common with “widgets”, and revert to it. Then it checks out “big-fix” and deploys. That’s it. Yes, you could do this yourself, but do you really remember the last common change between the two branches? Do you want to take the time to look for it, then revert, check out the new branch, and deploy? This is exactly the sort of common developer task that Sqitch aims to take the pain out of, and I’m thrilled to provide it.
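
In other words, checkout automates roughly these steps, assuming the last common change were named users_table (a made-up name):

sqitch revert --to users_table  # revert changes not shared with big-fix
git checkout big-fix            # switch branches
sqitch deploy                   # deploy the changes unique to big-fix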

You know what’s awesome, though? This feature never occurred to me. I didn’t come up with it, and didn’t implement it. No, it was dreamt up and submitted in a pull request by Ronan Dunklau. I have wanted VCS integration since the beginning, but had yet to get ‘round to it. Now Ronan has jumpstarted it. A million thanks!

One downside: it’s currently Git-only. I plan to add infrastructure for supporting multiple VCSes, probably with Git and Subversion support to begin with. Watch for that in v0.970 in the next couple months.

The other big change is the addition of SQLite support alongside the existing PostgreSQL support. Fortunately, I was able to re-use nearly all the code, so the SQLite adapter is just a couple hundred lines long. For the most part, Sqitch on SQLite works just like on PostgreSQL. The main difference is that Sqitch stores its metadata in a separate SQLite database file. This allows one to use a single metadata file to maintain multiple databases, which can be important if you use multiple databases as schemas pulled into a single connection via ATTACH DATABASE.
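
If you haven’t used it, ATTACH DATABASE makes a second database file queryable as a schema in the current connection — something like this, with made-up file and table names:

ATTACH DATABASE '/var/db/widgets.db' AS widgets;
SELECT widget_id, name FROM widgets.inventory;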

Curious to try it out? Install Sqitch from CPAN or via the Homebrew Tap and then follow the new Sqitch SQLite tutorial.

Of the multitude of other changes, one other bears mentioning: the new plan command. This command is just like log, except that it shows what is in the plan file, rather than what changes have been made to the database. This can be useful for quickly listing what’s in a plan, for example when you need to remember the names of changes required by a change you’re about to add. The --oneline option is especially useful here. An example from the tutorial’s plan:

> sqitch plan --oneline
In sqitch.plan
6238d8 deploy change_pass
d82139 deploy insert_user
7e6e8b deploy pgcrypto
87952d deploy delete_flip @v1.0.0-dev2
b0a951 deploy insert_flip
834e6a deploy flips
d0acfa deploy delete_list
77fd99 deploy insert_list
1a4b9a deploy lists
0acf77 deploy change_pass @v1.0.0-dev1
ec2dca deploy insert_user
bbb98e deploy users
ae1263 deploy appschema

I personally will be using this a lot. Yep, scratching my own itch here. What itch do you have to scratch with Sqitch?

In related news, I’ll be giving a tutorial at PGCon next month, entitled “Agile Database Development”. We’ll be developing a database for a web application using Git for source code management, Sqitch for database change management, and pgTAP for unit testing. This is the stuff I do all day long at work, so you can also think of it as “Theory’s Pragmatic approach to Database Development.” See you there?

Sqitch on Windows (and Linux, Solaris, and OS X)

Thanks to the hard-working hamsters at the ActiveState PPM Index, Sqitch is available for installation on Windows. According to the Sqitch PPM Build Status, the latest version is now available for installation. All you have to do is:

  1. Download and install ActivePerl
  2. Open the Command Prompt
  3. Type ppm install App-Sqitch

As of this writing, only PostgreSQL is supported, so you will need to install PostgreSQL.

But otherwise, that’s it. In fact, this incantation works for any OS that ActivePerl supports. Here’s where you can find the sqitch executable on each:

  • Windows: C:\perl\site\bin\sqitch.bat
  • Mac OS X: ~/Library/ActivePerl-5.16/site/bin/sqitch (Or /usr/local/ActivePerl-5.16/site/bin if you run sudo ppm)
  • Linux: /opt/ActivePerl-5.16/site/bin/sqitch
  • Solaris/SPARC (Business edition-only): /opt/ActivePerl-5.16/site/bin/sqitch

This makes it easy to get started with Sqitch on any of those platforms without having to become a Perl expert. So go for it, and then get started with the tutorial!

Sqitch Homebrew Tap

If Sqitch is to succeed, it needs to get into the hands of as many people as possible. That means making it easy to install for people who are not Perl hackers and don’t want to deal with CPAN. The Sqitch Homebrew Tap is my first public stab at that. It provides a series of “Formulas” for Homebrew users to easily download, build, and install Sqitch and all of its dependencies.

If you are one of these lucky people, here’s how to configure the Sqitch tap:

brew tap theory/sqitch

Now you can install the core Sqitch application:

brew install sqitch

That’s it. Make sure it works:

> sqitch --version
sqitch (App::Sqitch) 0.953

It won’t do you much good without support for your database, though. Currently, there is a build for PostgreSQL. Note that this requires the Homebrew core PostgreSQL server:

brew install sqitch_pg

Sqitch hasn’t been ported to other database engines yet, but once it is, expect other formulas to follow. But if you use PostgreSQL (or just want to experiment with it), you’re ready to rock! I suggest following along the tutorial or taking in the latest iteration of the introductory presentation (video of an older version here).

My thanks to IRC user “mistym” for the help and suggestions in getting this going. My Ruby is pretty much rusted through, so I could not have done it without the incredibly responsive help!

Bootstrapping Bucardo Master/Master Replication

Let’s say you have a production database up and running and you want to set up a second database with Bucardo-powered replication between them. Getting a new master up and running without downtime for an existing master, and without losing any data, is a bit fiddly and under-documented. Having just figured out one way to do it with the forthcoming Bucardo 5 code base, I wanted to blog it as much for my own reference as for yours.

First, let’s set up some environment variables to simplify things a bit. I’m assuming that the database names and usernames are the same, and only the host names are different:

export PGDATABASE=widgets
export PGHOST=here.example.com
export PGHOST2=there.example.com
export PGSUPERUSER=postgres

And here are some environment variables we’ll use for Bucardo configuration stuff:

export BUCARDOUSER=bucardo
export BUCARDOPASS=*****
export HERE=here
export THERE=there

Now let’s create the new database as a schema-only copy of the existing database:

createdb -U $PGSUPERUSER -h $PGHOST2 $PGDATABASE
pg_dump -U $PGSUPERUSER -h $PGHOST --schema-only $PGDATABASE \
| psql -U $PGSUPERUSER -h $PGHOST2 -d $PGDATABASE

You might also have to copy over roles; use pg_dumpall --globals-only to do that.
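
Something like this should do the trick (a sketch; globals are cluster-wide, so any target database will do):

pg_dumpall -U $PGSUPERUSER -h $PGHOST --globals-only \
| psql -U $PGSUPERUSER -h $PGHOST2 -d postgres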

Next, we configure Bucardo. Start by telling it about the databases:

bucardo add db $HERE$PGDATABASE dbname=$PGDATABASE host=$PGHOST user=$BUCARDOUSER pass=$BUCARDOPASS
bucardo add db $THERE$PGDATABASE dbname=$PGDATABASE host=$PGHOST2 user=$BUCARDOUSER pass=$BUCARDOPASS

Tell it about all the tables we want to replicate:

bucardo add table public.foo public.bar relgroup=myrels db=$HERE$PGDATABASE 

Create a multi-master database group for the two databases:

bucardo add dbgroup mydbs $HERE$PGDATABASE:source $THERE$PGDATABASE:source  

And create the sync:

bucardo add sync mysync relgroup=myrels dbs=mydbs autokick=0

Note autokick=0. This ensures that, while deltas are logged, they will not be copied anywhere until we tell Bucardo to do so.

And now that we know that any changes from here on in will be queued for replication, we can go ahead and copy over the data. The only caveat is that we need to disable the Bucardo triggers on the target system, so that the copy itself does not get queued up as deltas. We do that by setting the session_replication_role GUC to “replica” while doing the copy:

pg_dump -U $PGSUPERUSER -h $PGHOST --data-only -N bucardo $PGDATABASE \
| PGOPTIONS='-c session_replication_role=replica' \
psql -U $PGSUPERUSER -h $PGHOST2 -d $PGDATABASE

Now that all the data is copied over, we can have Bucardo copy any changes that have been made in the interim, as well as any going forward:

bucardo update sync mysync autokick=1
bucardo reload config

Bucardo will fire up the necessary syncs and copy over any interim deltas. And any changes you make to either system in the future will be copied to the other.

Sqitch: Trust, But Verify

New today: Sqitch v0.950. There are a few bug fixes, but the most interesting new feature in this release is the verify command, as well as the complementary --verify option to the deploy command. The add command has created test scripts since the beginning; they were renamed verify in v0.940. In v0.950 these scripts are actually made useful.

The idea is simply to test that a deploy script did what it was supposed to do. Such a test should make no assumptions about data or state other than that affected by the deploy script, so that it can be run against a production database without doing any damage. If it finds that the deploy script failed, it should die.

This is easier than you might at first think. Got a Sqitch change that creates a table with two columns? Just SELECT from it:

SELECT user_id, name
  FROM user
 WHERE FALSE;

If the table does not exist, the query will die. Got a change that creates a function? Make sure it was created by checking a privilege:

SELECT has_function_privilege('insert_user(text, text)', 'execute');

PostgreSQL will throw an error if the function does not exist. Not running PostgreSQL? Well, you’re probably not using Sqitch yet, but if you were, you might force an error by dividing by zero. Here’s an example verifying that a schema exists:

SELECT 1/COUNT(*)
  FROM information_schema.schemata
 WHERE schema_name = 'myapp';

At this point, Sqitch doesn’t care at all what you put into your verify scripts. You just need to make sure that they indicate failure by throwing an error when passed to the database command-line client.

The best time to run a change verify script is right after deploying the change. The --verify option to the deploy command does just that. If a verify script fails, the deploy is considered to have failed. Here’s what failure looks like:

> sqitch deploy
Deploying changes to flipr_test
  + appschema ................. ok
  + users ..................... ok
  + insert_user ............... ok
  + change_pass @v1.0.0-dev1 .. ok
  + lists ..................... psql:verify/lists.sql:7: ERROR:  column "timestamp" does not exist
LINE 1: SELECT nickname, name, description, timestamp
                                            ^
Verify script "verify/lists.sql" failed.
not ok
Reverting all changes
  - change_pass @v1.0.0-dev1 .. ok
  - insert_user ............... ok
  - users ..................... ok
  - appschema ................. ok
Deploy failed

Good, right? In addition, you can always verify the state of a database using the verify command. It runs the verify scripts for all deployed changes. It also ensures that all the deployed changes were deployed in the same order as they’re listed in the plan, and that no changes are missing. The output is similar to that for deploy:

> sqitch verify
Verifying flipr_test
  * appschema ................. ok
  * users ..................... ok
  * insert_user ............... ok
  * change_pass @v1.0.0-dev1 .. ok
  * lists ..................... ok
  * insert_list ............... ok
  * delete_list ............... ok
  * flips ..................... ok
  * insert_flip ............... ok
  * delete_flip @v1.0.0-dev2 .. ok
  * pgcrypto .................. ok
  * insert_user ............... ok
  * change_pass ............... ok
Verify successful

Don’t want verification tests/scripts? Use --no-verify when you call sqitch add and none will be created. Or tell it never to create verify scripts by turning off the add.with_verify option:

sqitch config --bool add.with_verify no

If you somehow run deploy --verify or verify anyway, Sqitch will emit a warning for any changes without verify scripts, but won’t consider them failures.

Up Front Dependency Checking

The other significant change in v0.950 is that the deploy and revert commands (and, by extension, the rebase command) now check dependencies for all changes before deploying or reverting anything. Previously, Sqitch checked the dependencies for each change just before deploying it, but it makes much more sense to check them for all changes to be deployed before doing anything at all. This reduces the chances of unexpected reversions.

Still hacking on Sqitch, of course, though nearly all the commands I initially envisioned are done. Next up, I plan to finally implement support for SQLite, add a few more commands to simplify plan file modification, and create a new site, since the current site is woefully out-of-date. Until then, though, check out this presentation and, of course, the tutorial.

Sqitch Update: All Your Rebase Are…Never Mind

I’m pleased to announce the release of Sqitch v0.940. The focus of this release? Sanity.

I’ve been doing a lot of Sqitch-based database development at work. Overall it has worked quite well. Except for one thing: the order in which changes were arranged would often change from one run to the next. Oy.

Out of Order

The reason? The plan parser would perform a topological sort of all the changes between tags based on their dependencies. I’ve been careful, for the most part, to keep my changes in the proper order in our plan files, but the topological sort would often pick a different order. Still valid in terms of dependency ordering, but different from the plan file.

Given the same inputs, the sort always produced the same order. However, whenever I added a new change (and I do that all the time while developing), there would then be a new input, which could result in a completely different order. The downside is that I would add a change, run sqitch deploy, and it would die because it thought something needed to be deployed that had already been deployed, simply because it sorted it to come after an undeployed change. So annoying. It also caused problems for production deployments, because different machines with different Perls would sort the plans in different ways.

So I re-wrote the sorting part of the plan parser so that it no longer sorts. The list of changes is now always identical to the order in the plan file. It still checks dependencies, of course, only now it throws an exception if it finds an ordering problem, rather than re-ordering for you. I’ve made an effort to tell the user how to move things around in the plan file to fix ordering issues, so hopefully everything will be less mysterious.

Of course, many will never use dependencies, in which case this change has no effect. But it was important to me, as I like to specify dependencies as much as I can, for my own sanity.

See? There’s that theme!

Everyone has a Mom

Speaking of ordering, as we have been starting to do production deployments, I realized that my previous notion to allow developers to reorder changes in the plan file without rebuilding databases was a mistake. It was too easy for someone to deploy to an existing database and miss changes, because there was nothing to flag that changes had not been deployed. This was especially a problem before I addressed the ordering issue.

Even with ordering fixed, I thought about how git push works, and realized that it was much more important to make sure things really were consistent than it was to make things slightly more convenient for developers.

So I changed the way change IDs are generated. The text hashed for IDs now includes the ID of the parent change (if there is one), the change dependencies, and the change note. If any of these things change, the ID of the change will change. So they might change a lot during development, while one moves things around, changes dependencies, and tweaks the description. But the advantage is for production, where things have to be deployed exactly right, with no modifications, or else the deploy will fail. This is sort of like requiring all Git merges to be fast-forwarded, and philosophically in line with the Git practice of never changing commits after they’re pushed to a remote repository accessible to others.

Curious what text is hashed for the IDs? Check out the new show command!
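
For example, assuming a change named add_users_table, something like this should dump the exact text that gets hashed:

sqitch show change add_users_table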

Rebase

As a database hacker, I still need things to be relatively convenient for iterative development. So I’ve also added the rebase command. It’s simple, really: it just does a revert and a deploy in a single command. I’m doing this all day long, so I’m happy to save myself a few steps. It’s also nice that I can do sqitch rebase @HEAD^ to revert and re-apply the latest change over and over again without fear that it will fail because of an ordering problem. But I already mentioned that, didn’t I?

Order Up

Well, mostly. Another ordering issue I addressed was for the revert --to option. It used to be that it would find the change to revert to in the plan, and revert based on the plan order. (And did I mention that said order might have changed since the last deploy?) v0.940 now searches the database for the revert target. Not only that, the full list of changes to deploy to revert to the target is also returned from the database. In fact, the revert no longer consults the plan file at all. This is great if you’ve re-ordered things, because the revert will always be the reverse order of the previous deploy. Even if IDs have changed, revert will find the changes to revert by name. It will only fail if you’ve removed the revert script for a change.

So simple, conceptually: revert reverts in the proper order based on what was deployed before. deploy deploys based on the order in the plan.

Not @FIRST, Not @LAST

As a result of the improved intelligence of revert, I have also deprecated the @FIRST and @LAST symbolic tags. These tags forced a search of the database, but were mainly used for revert. Now that revert always searches the database, there’s nothing to force. They’re still around for backward compatibility, but no longer documented. Use @ROOT and @HEAD, instead.

Not Over

So lots of big changes, including some compatibility changes. But I’ve tried hard to make them as transparent as possible (old IDs will automatically be updated by deploy). So take it for a spin!

Meanwhile, I still have quite a few other improvements on my short list.

New in PostgreSQL 9.2: format()

There’s a new feature in PostgreSQL 9.2 that I don’t recall seeing blogged about elsewhere: the format() function. From the docs:

Format a string. This function is similar to the C function sprintf; but only the following conversion specifications are recognized: %s interpolates the corresponding argument as a string; %I escapes its argument as an SQL identifier; %L escapes its argument as an SQL literal; %% outputs a literal %. A conversion can reference an explicit parameter position by preceding the conversion specifier with n$, where n is the argument position.
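
A quick, toy illustration of those specifiers (my own example, not from the docs):

SELECT format('SELECT * FROM %I WHERE name = %L', 'my table', $$O'Reilly$$);
-- SELECT * FROM "my table" WHERE name = 'O''Reilly'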

If you do a lot of dynamic query building in PL/pgSQL functions, you’ll immediately see the value in format(). Consider this function:

CREATE OR REPLACE FUNCTION make_month_partition(
    base_table   TEXT,
    schema_name  TEXT,
    month        TIMESTAMP
) RETURNS VOID LANGUAGE plpgsql AS $_$
DECLARE
    partition TEXT := quote_ident(base_table || '_' || to_char(month, '"y"YYYY"m"MM'));
    month_start TIMESTAMP := date_trunc('month', month);
BEGIN
    EXECUTE '
        CREATE TABLE ' || quote_ident(schema_name) || '.' || partition || ' (CHECK (
               created_at >= ' || quote_literal(month_start) || '
           AND created_at < '  || quote_literal(month_start + '1 month'::interval) || '
        )) INHERITS (' || quote_ident(schema_name) || '.' || base_table || ')
    ';
    EXECUTE 'GRANT SELECT ON ' || quote_ident(schema_name) || '.' || partition || '  TO dude;';
END;
$_$;

Lots of concatenation and use of quote_ident() to get things just right. I don’t know about you, but I always found this sort of thing quite difficult to read. But format() allows us to eliminate most of the operators and function calls. Check it:

CREATE OR REPLACE FUNCTION make_month_partition(
    base_table   TEXT,
    schema_name  TEXT,
    month        TIMESTAMP
) RETURNS VOID LANGUAGE plpgsql AS $_$
DECLARE
    partition TEXT := base_table || '_' || to_char(month, '"y"YYYY"m"MM');
    month_start TIMESTAMP := date_trunc('month', month);
BEGIN
    EXECUTE format(
        'CREATE TABLE %I.%I (
            CHECK (created_at >= %L AND created_at < %L)
        ) INHERITS (%I.%I)',
        schema_name, partition,
        month_start, month_start + '1 month'::interval,
        schema_name, base_table
    );
    EXECUTE format('GRANT SELECT ON %I.%I TO dude', schema_name, partition);
END;
$_$;

I don’t know about you, but I find that a lot easier to read, which means it’ll be easier to maintain. So if you do much dynamic query generation inside the database, give format() a try; I think you’ll find it a winner.

Update 2012-11-16: Okay, so I somehow failed to notice that format() was actually introduced in 9.1 and covered by depesz. D’oh! Well, hopefully my little post will help to get the word out more, at least. Thanks to my commenters.

Thinking about Changing Sqitch Change IDs

When Sqitch (the database change management app I’ve been working on for the last several months) parses a deployment plan, it creates a unique ID for each change in the plan. This ID is a SHA1 hash generated from information about the change, which is a string that looks something like this:

project flipr
change add_users_table
planner Marge N. O’Vera <marge@example.com>
date 2012-11-14T01:10:13Z

The nice thing about the ID is that it’s unique: it’s unlikely that the same user with the same email address will add a change with the same name to a project with the same name within a single second. If the plan includes a URI, that’s included, too, for additional uniqueness.
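
Conceptually, the ID is just the SHA1 digest of that string — something like this shell sketch, though the exact byte layout Sqitch feeds to the hash is an internal detail:

printf 'project flipr\nchange add_users_table\nplanner Marge N. O’Vera <marge@example.com>\ndate 2012-11-14T01:10:13Z' | shasum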

Note, however, that it does not include information about any other changes. Git, on which I modeled the generation of these IDs, always includes the parent commit SHA1 in its uniquely-identifying info. An example:

> git cat-file commit 744c01bfa3798360c1792a8caf784b650e52d89e               
tree d3a64897cca4538ff5c0c41db3f82ab033a09bec
parent 482a79ae2cda5085eed731be2e70739ab37997ee
author David E. Wheeler <david@justatheory.com> 1337355746 -0400
committer David E. Wheeler <david@justatheory.com> 1337355746 -0400

Timestamp v0.30.

The reason Git does this is so that a commit is not just uniquely identified globally, but so that it can only follow an existing commit. Mark Jason Dominus calls this Linus Torvalds’ greatest invention. Why? This is how Git knows it can fast-forward changes.

Why doesn’t Sqitch do something similar? My original thinking had been to make it easier for a database developer to do iterative development. And one of the requirements for that, in my experience, is the ability to freely reorder changes in the plan. Including the SHA1 of the preceding change would make that trickier. But it also means that, when you deploy to a production database, you lose that extra layer of security that ensures that, yes, the next change really should be deployed. That is, it would be much harder to deploy with changes missing or changed from what was previously expected. And I think that’s only sane for a production environment.

Given that, I’ve started to rethink my decision to omit the previous change SHA1 from the identifier of a change. Yes, it could be a bit more of a hassle for a developer, but not, I think, that much of a hassle. The main thing would be to allow reverts to look up their scripts just by change name or even file name, rather than ID. We want deploys to always be correct, but I’m thinking that reverts should always just try very hard to remove changes. Even in production.

I am further thinking that the ID should even include the list of prerequisite changes for even stronger identification. After all, one might change just the dependencies and nothing else, but it would still be a different change. And maybe it should include the note, too? The end result would be a hash of something like this:

project flipr
change add_users_table
parent 7cd96745746cd6baa5da352de782354b21838b25
requires [schemas roles common:utils]
planner Marge N. O’Vera <marge@example.com>
date 2012-11-14T01:10:13Z

Adds the users table to the database.

This will break existing installations, so I’d need to add a way to update them, but otherwise, I think it might be a win overall. Thoughts?

Mocking Serialization Failures

I’ve been hacking on the forthcoming Bucardo 5 code base the last couple weeks, as we’re going to start using it pretty extensively at work, and it needed a little love to get it closer to release. The biggest issue I fixed was the handling of serialization failures.

When copying deltas from one database to another, Bucardo sets the transaction isolation to “Serializable”. As of PostgreSQL 9.1, this is true serializable isolation. However, there were no tests for it in Bucardo. And since previous versions of PostgreSQL had poorer isolation (retained in 9.1 as “Repeatable Read”), I don’t think anyone really noticed it much. As I’m doing all my testing against 9.2, I was getting the serialization failures about half the time I ran the test suite. It took me a good week to chase down the issue. Once I did, I posted to the Bucardo mail list, pointing out that Bucardo was not attempting to run a transaction again after failure, and that, at any rate, the model for how it thought to do so was a little wonky: it let the replicating process die, on the assumption that a new process would pick up where it left off. It did not.

Bucardo maintainer Greg Sabino Mullane proposed that we let the replicating process try again on its own. So I went and made it do that. And then the tests started passing every time. Yay!

Returning to the point of this post, I felt that there ought to be tests for serialization failures in the Bucardo test suite, so that we can ensure that this continues to work. My first thought was to use PL/pgSQL in 8.4 and higher to mock a serialization failure. Observe:

david=# \set VERBOSITY verbose
david=# DO $$BEGIN RAISE EXCEPTION 'Serialization error'
       USING ERRCODE = 'serialization_failure'; END $$;
ERROR:  40001: Serialization error
LOCATION:  exec_stmt_raise, pl_exec.c:2840

Cool, right? Well, the trick is to get this to run on the replication target, but only once. When Bucardo retries, we want it to succeed, thus properly demonstrating the COPY/SERIALIZATION FAIL/ROLLBACK/COPY/SUCCESS pattern. Furthermore, when it copies deltas to a target, Bucardo disables all triggers and rules. So how to get something trigger-like to run on a target table and throw the serialization error?

Studying the Bucardo source code, I discovered that Bucardo itself does not disable triggers and rules. Rather, it sets the session_replication_role GUC to “replica”. This causes PostgreSQL to disable the triggers and rules — except for those that have been set to ENABLE REPLICA. The PostgreSQL ALTER TABLE docs:

The trigger firing mechanism is also affected by the configuration variable session_replication_role. Simply enabled triggers will fire when the replication role is “origin” (the default) or “local”. Triggers configured as ENABLE REPLICA will only fire if the session is in “replica” mode, and triggers configured as ENABLE ALWAYS will fire regardless of the current replication mode.

Well how cool is that? So all I needed to do was plug in a replica trigger and have it throw an exception once but not twice. Via email, Kevin Grittner pointed out that a sequence might work, and indeed it does. Because sequence values are non-transactional, sequences return different values every time they’re accessed.

Here’s what I came up with:

CREATE SEQUENCE serial_seq;

CREATE OR REPLACE FUNCTION mock_serial_fail(
) RETURNS trigger LANGUAGE plpgsql AS $_$
BEGIN
    -- Sequence values survive rollback, so this fails on odd-numbered
    -- calls and succeeds on even-numbered ones.
    IF nextval('serial_seq') % 2 = 0 THEN RETURN NEW; END IF;
    RAISE EXCEPTION 'Serialization error'
          USING ERRCODE = 'serialization_failure';
END;
$_$;

CREATE TRIGGER mock_serial_fail AFTER INSERT ON bucardo_test2
    FOR EACH ROW EXECUTE PROCEDURE mock_serial_fail();
ALTER TABLE bucardo_test2 ENABLE REPLICA TRIGGER mock_serial_fail;

The first INSERT (or, in Bucardo’s case, COPY) to bucardo_test2 will die with the serialization error. The second INSERT (or COPY) succeeds. This worked great, and I was able to write tests in a few hours and get them committed. And now we can be reasonably sure that Bucardo will always properly handle serialization failures.

Sqitch Symbolism

It has been a while since I last blogged about Sqitch. The silence is in part due to the fact that I’ve moved from full-time Sqitch development to actually putting it to use building databases at work. This is exciting, because it needs the real-world experience to grow up.

That’s not to say that nothing has happened with Sqitch. I’ve just released v0.931, which includes a bunch of improvements since I wrote about v0.90. First, a couple of the minor things:

  • Sqitch now checks dependencies before reverting, and dies if they would be broken by the revert. This change, introduced in v0.91, required that the dependencies be moved to their own table, so if you’ve been messing with an earlier version of Sqitch, you’ll have to rebuild the database. Sorry about that.
  • I fixed a bunch of Windows-related issues, including finding the current user’s full name, correctly setting the locale for displaying dates and times, executing shell commands, and passing tests. The awesome ActiveState PPM Index has been invaluable in tracking these issues down.
  • Added the bundle command. All it does is copy your project configuration file, plan, and deploy, revert, and test scripts to a directory you identify. The purpose is to be able to export the project into a directory structure suitable for distribution in a tarball, RPM, or whatever. That may not sound incredibly useful, since copying files is no big deal. However, the long-term plan is to add VCS support to Sqitch, which would entail fetching scripts from various places in VCS history. At that point, it will be essential to use bundle to do the export, so that scripts are properly exported from the VCS history. That said, I’m actually using it already to build RPMs. Useful already!

Symbolic References

And now the more immediately useful changes. First, I added new symbolic tags, @FIRST and @LAST. These represent the first and last changes currently deployed to a database, respectively. These complement the existing @ROOT and @HEAD symbolic tags, which represent the first and last changes listed in the plan. The distinction is important: the plan file versus what has actually been deployed to a database.

The addition of @FIRST and @LAST may not sound like much, but there’s more.

I also added forward and reverse change reference modifiers ^ and ~. The basic idea was stolen from Git Revisions, though the semantics vary. For Sqitch changes, ^ appended to a name or tag means “the change before this change,” while ~ means “the change after this change”. I find ^ most useful when doing development, where I’m constantly deploying and reverting a change as I work. Here’s how I do that revert:

sqitch revert --to @LAST^

That means “revert to the change before the last change”, or simply “revert the last change”. If I want to revert two changes, I use two ^s:

sqitch revert --to @LAST^^

To go back any further, I need to use an integer with the ^. Here’s how to revert the last four changes deployed to the database:

sqitch revert --to @LAST^4

The cool thing about this is that I don’t have to remember the name of the change to revert, as was previously required. And of course, if I just wanted to deploy two changes since the last deployment, I would use ~~:

sqitch deploy --to @LAST~~

Nice, right? One thing to bear in mind, as I was reminded while giving a Sqitch presentation to PDXPUG: Changes are deployed in a sequence. You can think of them as a linked list. So this command:

sqitch revert @LAST^^

Does not mean to revert only the change two before the last one while leaving the changes after it in place. It will revert the last change and the penultimate change, leaving @LAST^^ as the most recent change deployed. This is why I actually encourage the use of the --to option, to emphasize that you’re deploying or reverting all changes to the named point, rather than deploying or reverting the named point in isolation. Sqitch simply doesn’t do that.

Internationalize Me

One more change. With today’s release of v0.931, there is now proper internationalization support in Sqitch. The code has been internationalized for a long time, but there was no infrastructure for localizing it. Now there is, and I’ve stubbed out files for translating Sqitch messages into French and German. Adding others is easy.

If you’re interested in translating Sqitch’s messages (only 163 of them, should be quick!), just fork Sqitch, juice up your favorite gettext editor, and start editing. Let me know if you need a language file generated; I’ve built the tools to do it easily with dzil, but haven’t released them yet. Look for a post about that later in the week.

Presentation

Oh, and that PDXPUG presentation? Here are the slides. Enjoy!

Sane SQL Change Management with Sqitch from David E. Wheeler