Just a Theory

Trans rights are human rights

More App::Info

Yes, I put out a new version of App::Info today. Well, a couple of versions, actually.

First of all, all the problems with the unit tests should be fixed. They actually aren’t all that comprehensive, but since the values returned from the various methods can vary, it didn’t make sense for them to be super precise. I have internal unit tests that are more precise, but they don’t execute when folks download App::Info from the CPAN.

But the major change in version 0.10 is the addition of error levels. Now when you construct an App::Info object, you can specify an error level that corresponds to a Carp function. Subclass writers (currently just yours truly) then just have to use the error() method to record errors (although serious problems should probably still just croak). Client code can specify how it wants to handle errors. The default is “carp”, but the CPAN unit tests, for example, use “silent”.
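In client code, usage looks something like this. This is only a sketch: the constructor parameter name (“error_level” here) and the Apache subclass are my illustration of the idea, so check the actual App::Info documentation before relying on them.

```perl
# A minimal sketch of the new error-level API. The parameter
# name "error_level" is an assumption -- consult the App::Info
# docs for the real spelling.
use App::Info::HTTPD::Apache;

# Client code picks how errors are handled: "carp" (the default),
# "croak", "cluck", "confess", or "silent".
my $app = App::Info::HTTPD::Apache->new( error_level => 'silent' );

# Errors recorded internally via error() are now suppressed
# rather than carped to STDERR.
print $app->version if $app->installed;
```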

This functionality makes App::Info much more customizable for creating installation utilities, as the problems the subclasses run into – such as not being able to find the files they need, or not being able to parse a value from a file – can be made as verbose as necessary.

Next up, Matt Sergeant suggests that I borrow from the AxKit Makefile.PL to make interrogating libiconv much more robust. Maybe in a couple of weeks. I’ve been back-burnering work that I really need to get done in order to work on this!

Originally published on use Perl;

All about App::Info

Yesterday I released App::Info to the CPAN. I started it on Friday, and put the finishing touches on it just yesterday. It was a busy weekend.

I got the idea for App::Info after looking at the work Sam Tregar has done building an installer for Bricolage. He had done all this work to determine whether and how Apache, PostgreSQL, Expat, and libiconv had been installed, and it seemed a shame not to take that code and generalize it for use by others. So I whipped up App::Info, with the idea of building a framework for aggregating data about applications of all kinds, specifically for the purpose of determining dependencies before installing software.

I think it has turned out rather well so far. I added code to determine the version numbers of libiconv and Expat, although it’s imperfect (and that accounts for the CPAN-Testers failures – I’ll have a better release with some of the bugs fixed shortly). But overall the idea is for this to be a uniform architecture for learning about software installed on a system, and I’d like to invite folks to contribute new App::Info subclasses that provide metadata for the applications with which they’re most familiar.

That said, this is a new module, and still in flux. I’ve been talking to Dave Rolsky about it, as he has been thinking about the need for something like this, himself. In the past, Dave and I have talked about creating a generalized API for installing software, and Dave has even set up a Savannah project for that purpose. In truth, I had envisioned App::Info as one part of such an initiative – the part responsible for determining what’s already installed. And while the API I’ve created is good for this, Dave points out that it’s not enough. We need something that can also prompt the user for information – to determine if the right copy of an application was found, for example.

I think I can work this into App::Info relatively easily, however. Currently, if App::Info can’t find the data it needs, it issues warnings. But this isn’t the best approach, I think. Sometimes, you might want such errors to trigger exceptions. Other times, you might want them totally silent. So I was planning to add a flag to the API such that you can specify the behavior for such errors. Something like DBI’s RaiseError or PrintError options. But then, it’s just another step to add a prompting option. Such an option could be set to prompt for new data at every step of the process, or only at important points (like finding the proper copy of httpd on the file system), or only when data can’t be found.

So I hope to find the tuits to add this functionality in the next week. In the meantime, I’m going to try to keep up-to-date on my journal more.

Originally published on use Perl;

My OS X Adventures

About two weeks ago, I made the switch from Linux to Mac OS X on my Titanium PowerBook. Yes, it was quite an adventure. But I’m happy to report that it went extremely well. I got all of my favorite *nix utilities running, as well as all the essentials a dedicated Perl/mod_perl/PostgreSQL hacker might want.

That’s not to say that it wasn’t a lot of work. But even more work was all the notes I was keeping so that I’d know how to do it again. And since I was taking all those notes, I figured I might as well do the extra work of writing up a little article detailing what I did and how I did it. The result is now available as My Mac OS X Adventures.

Any feedback will be most welcome, as I’d like to keep this page up-to-date!

Originally published on use Perl;

Catch Up

It’s good to finally be caught up with all the Bricolage posts I’ve just been ignoring for the last week or so. That’s not to say that I’m totally caught up – there are still a couple of open bugs I need to tackle. But there are a number of important conversations going on on the development list, and I’ve been meaning to reply to them for a while. It feels good to finally get to them.

Among the issues we’re looking at right now are adding a preference to allow users to specify how URIs are created, and a complete reworking of the element system so that it’s more consistent and doesn’t seem so bolted together. I think that 1.6 will turn out to be the killer version of Bricolage — the one that really makes its mark — if we go forward with the changes. And I can see no reason why we wouldn’t!

Meanwhile, my new DSL bridge finally arrived, so I’m going to spend some time getting that up and going. It seems like it takes a whole lot more setup than my old one, but whatever! I’m paying a lot less and getting just as much. Hopefully it’ll be done by tomorrow and I can leave the modem to rot and die.

Originally published on use Perl;


I worked a lot over the weekend on the Bricolage section of the upcoming O’Reilly book on Mason, and now it’s done. I mean, it’s a draft. Naturally, I’ll want to go in and edit it again in a few weeks, once I have some distance from it.

Dave and Ken were kind enough to let me rewrite their section on content management, since for all intents and purposes Mason-CM appears to be dead, and Bricolage is quite actively maintained. I appreciate their allowing me to contribute – I think this will be good for Mason and for Bricolage.

At any rate, after a busy couple of weeks, I’m ready to get back to plugging bugs in Bricolage and planning its future. I’m a bit behind. If I didn’t have to spend so much time trying to hustle up work, it would help. Anyone want to hire an experienced Perl hacker and Bricolage developer?

Originally published on use Perl;

Release, Release, Release!

Well, I got Bricolage 1.3.0 out yesterday. It’s a development release for the upcoming 1.4.0 release. There are two major new features in 1.3.0. The big one is a SOAP server. Sam Tregar has been hard at work on this puppy. It promises to simplify the process of autopublishing stories, and to make importing and exporting assets and elements a no-brainer. I say kudos to Sam for his hard work.

The second major new feature is a real live configure process. Mark Jaroski of the World Health Organization developed this for us using Autoconf. Unfortunately, it wasn’t ready in time for the 1.3.0 release, but it’s already looking better and should be in 1.3.1 in a couple of days. Meanwhile, I need to get 1.2.1 out. This is mostly a bug-fix release of Bricolage, although there is one new feature. A new module greatly simplifies the process of Apache configuration, making it easy to, among other things, run Bricolage on a virtual host. The one drawback to this feature is that it relies heavily on mod_perl <Perl> sections, and these are somewhat broken, although there is a patch. Other than that, there are loads of bug fixes in 1.2.1, so look for it soon!

Originally published on use Perl;

mod_perl Bug Confirmed!

The mod_perl bug that I reported finding last week has been confirmed and a patch supplied by Salvador Ortiz Garcia. Read all about it here.

It turns out to be uglier than I thought, because which Location and Directory directives mod_perl decides to “upgrade” to their Match versions is random. Actually, we’ve found it to be consistent in Bricolage (the relevant source code is here). That is, although we can’t predict which directives mod_perl will “upgrade”, it does tend to “upgrade” the same ones every time. This allows me to check the mod_perl version and try to do the right thing regardless. Maybe we’ll require mod_perl 1.27 when it finally comes out.

But at any rate, I’m glad to have the thing addressed and understood. It’s not common that I notice a bug in Perl or mod_perl, and it’s rewarding to see someone pick it up and address it quickly. Thanks Salvador!

Originally published on use Perl;

You’re So Vain

I’d just like to announce that I finally have a personal web page up. It has been years since I took down my “Hotlinks 1995!” page at U.Va., and the time finally arrived for me to have my own site again. Comments on the content and layout are most welcome.

Later, I’ll post a rant on how I exploited use Perl; for my own nefarious purposes.

Originally published on use Perl;

A mod_perl Bug?

I think I’ve found a bug in the <Perl> sections of mod_perl. More information can be found here. The upshot is that the Location directive, when used in <Perl> sections, seems to be used internally as a LocationMatch directive instead.

I ran into this because I was simplifying Bricolage’s Apache configuration. I moved the whole complex Apache configuration into a Bricolage module, so that it’s much simpler to configure Apache to run Bricolage — and much easier to use virtual hosts. (This is all in Bricolage’s CVS, BTW — it’s not yet released). I got around the problem by specifying all of my Location directives with a caret (^) prepended to them so that they behave like a regex version of Location (i.e., LocationMatch), but I’m kinda annoyed to have to do that. Am I right in thinking that the LocationMatch directives add a bit more overhead to every request?
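The caret workaround looks roughly like this inside a &lt;Perl&gt; section. This is only a sketch of the technique; the handler name is illustrative, and the real Bricolage configuration is considerably more involved.

```perl
# Inside a mod_perl <Perl> section. Because mod_perl may treat
# Location as LocationMatch internally, anchoring each path with
# a caret makes it match correctly even when interpreted as a
# regex.
$Location{'^/admin'} = {
    SetHandler  => 'perl-script',
    PerlHandler => 'My::Handler',   # illustrative handler name
};
```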

Originally published on use Perl;

Software Development Methodology

I feel that it’s important to have a comprehensive approach to software development. It’s not enough to be good at coding, or testing, or writing documentation. It’s far better to excel at managing every step of the development process in order to ensure the quality and consistency of the end-to-end work as well as of the final product. I aim to do just that in my work. Here I briefly outline my methodology for achieving that aim.

First, good software development starts with good planning and research. I strive to attain a thorough understanding of what I’m developing by listening to the people to whom it matters most: the users. By gaining insight into how people in the target market think about the problem space, and by strategizing about how technology can address that space, a picture of the product takes shape. This research coalesces into a set of pragmatic requirements and goals that balance the demands of a realistic development schedule with the needs and desires of the target market.

Once the requirements have been identified, it’s time for prototyping. Task flow diagrams of user interactions model the entire system. Evaluations from the target market refine these schematics, shaping the look and feel of the final product. I cannot emphasize enough the importance of seeking market feedback to build solid and meaningful metaphors into the design. These concepts drive the user experience and make or break the success of the final product. The outcome of this feedback loop will be a UI, terminology, and object design grounded on intuitive concepts, scalable technologies, and a reliable architecture.

Next, a talented development team must be assembled and backed by a dependable, project management-oriented implementation infrastructure. Team-building is crucial for the success of any product, and in software development, a diverse set of engineers and specialists with complementary talents must come together and work as an efficient whole. As a result, I consider it extremely important to create a working culture of which team members want to be a part. Such an environment doesn’t foster a sense of entitlement, but rather of conviviality and excitement. If team members believe in what they’re doing, and they enjoy doing it, then they’re likely to do it well.

And what they’ll do is actually create the software. Each element of the product design must be broken down into its basic parts, fit into a generalizable design, and built back up into meaningful objects. I further require detailed documentation of every interface and implementation, as well as thorough unit testing. In fact, the tests are often written before the interfaces are written, ensuring that they will work as expected throughout the remainder of the development process. All aspects of the application must be implemented according to a scalable, maintainable methodology that emphasizes consistency, quality, and efficiency.
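In Perl, that test-first habit might look something like this with Test::More. This is a generic sketch, not code from any particular project; My::Counter is a made-up module standing in for an interface that doesn’t exist yet.

```perl
# Write the test before the interface exists: it spells out the
# contract that My::Counter must eventually satisfy.
use strict;
use warnings;
use Test::More tests => 3;

use_ok('My::Counter');                  # the module loads
my $c = My::Counter->new;
is( $c->count, 0, 'starts at zero' );   # initial state
$c->increment;
is( $c->count, 1, 'increment adds one' );
```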

The emphasis on quality naturally continues into the quality assurance phase of the development process. The feature set is locked so that development engineers can work closely with QA engineers to test edge conditions, identify bugs, fix them, and ensure that they remain fixed. I prefer to have QA engineers punish nightly builds with suites of tests while development engineers fix the problems identified by previous days’ tests. QA is considered complete when the product passes all the tests we can dream up.

And finally, once all of the QA issues have been addressed, the final product is delivered. Naturally, the process doesn’t stop there, but starts over – in fact, it likely has already started over. New features must be schematically tested with likely users, and new interfaces designed to implement them. The idea is to end up with a solid product that can grow with the needs of the target market.

Looking for the comments? Try the old layout.

Tuesday’s SF.pm Meeting

Tuesday night was the monthly San Francisco Perl Mongers meeting. Invited to speak that night was none other than Randal L. Schwartz, who spoke on Perl 6. But there were a few last-minute surprise guests: Larry Wall and Chip Salzenberg! What with these three participating alongside SF.pm’s usual suspects (Vicki Brown, Rich Morin, et al.), it was an evening of heavyweights the likes of which we’ve never seen at SF.pm. Randal invited Larry to come while they were on the cruise last week, and it was great to have him there. I guess Randal had never given a talk like this with Larry in the audience before, because he said he was actually nervous, despite his years teaching!

A good time was had by all, I think, and we all learned a lot about Perl 6. Having Larry there to chime in (and read his own quotes from Randal’s slides!) really added to the content of the talk. I, for one, am incredibly excited about Perl 6. It’ll go a long way toward making the big database applications I’ve gotten into the habit of writing much easier to control. And the syntax Larry is outlining in his Apocalypses is really coming along. I just wish I didn’t have to wait for it until…when? There are 33 chapters in Camel trois, and we’re only up to Apocalypse 4!

At any rate, it was a great meeting. The after-dinner talk in the bar was good, too. Really a very entertaining bunch. I’d publicly like to thank Quinn Weaver &lt;zenji at gmx dot net&gt; and Karen Andelin &lt;kaan at chevrontexaco dot com&gt; for organizing a great event – and for getting other members to contribute en masse to the Perl Foundation 2002 Grant fund. And thanks also to Randal (who promised that Stonehenge Consulting Services would match the funds raised at the meeting), Larry, and Chip for making it a very worthwhile event!

Originally published on use Perl;

CVS Branching Philosophy

“This will go down in [my] permanent record,” eh? I guess I’d better make it good.

There’s been quite the debate going on over on the Bricolage developers list. For those who don’t know about Bricolage, it’s a full-featured, open-source, 100% Perl content management system that I maintain on SourceForge. You can learn more about it here.

Anyway, the debate is over the art and science of CVS management. We’ve been adding features to both minor and major releases up to now, but there has been substantial argument that the minor releases should be bug-fix only. The advantage of this approach is that new code won’t threaten stable releases. The disadvantage is that it could slow development. Quick and easy new features will have to wait for more involved features to be complete before they can see the light of day of a release.

There are some strong opinions, but I’m currently sitting on the fence. More opinions are welcome!

Originally published on use Perl;