Just a Theory

TiBooks

It’s really quite stunning how many Apple notebook computers there are at OSCON. It’s a sign of Apple’s remarkable hardware design and of the attractiveness of Mac OS X that so many Perl developers are making the switch. And it’s not just the Perl developers. A friend of mine who is an active member of the Ant (the Java build tool) development team just made the leap (from Windows, no less), and he’s pretty excited about it. I swear that over half the notebook computers I’ve seen people using here are either iBooks or TiBooks, with a smattering of PowerBooks, to boot. From where I’m sitting at the moment, 3 of the 4 computers I can see (excluding my own TiBook!) are Macs.

This can only be good for the Mac platform. As someone who recently returned to the Mac OS fold (after a few years of Windows and then a few years of Linux), I’m thrilled to see how attractive the combination of the Mac UI and the Unix guts is to Unix-oriented developers. I love that things, as Ziggy says, just work, and I love that I can get all the Unix power tools I need running, and that I am able to run all of this great software on the slickest hardware to be found.

I firmly believe that things are looking very, very good in the Mac OS X/Unix software market, and it’s because all of the geeks (all of the alpha geeks, as Tim O’Reilly might say) are coming to the platform and contributing great new software that will only make it better. And I’m going to thoroughly enjoy the ride.

Originally published on use Perl;

DV Uploading…

I’m sitting in the Apple connectivity room downloading a DV from my camera to a nice G4 workstation. The video is complete coverage of Damian’s “Preparing for Perl 6” presentation this afternoon. Once it’s on the Mac, Nat will take care of making it available to the general population. I’ve also recorded the talk that he gave with Larry this morning, and will download that, too. But first, I have to find some dinner. God, I hope I don’t sound too goofy laughing all the time!

For those of you who aren’t here, TPC is going quite well. The Lightning round was entertaining — Nat arranged for it to be recorded, too. It has been great to meet in person all these folks I’ve read or corresponded with over the last few years.

Okay, the video is just about downloaded, and I’m starving. More later.

Originally published on use Perl;

OSCON Bound

I’m off to OSCON. I just noticed that I’m going to be arriving after the State of the Onion. Crap! Well, I’m sure I can find someone to fill me in on all of the gory details.

I’m staying at the conference hotel, so look me up if you want to have a beer or something.

Originally published on use Perl;

The Main Event

I released a new version of App::Info on Thursday. This is a major new version because I’ve added a new feature: event handling.

Dave Rolsky had raised some issues regarding how App::Info clients might be able to interact with the API in order to confirm the data it has found, or to help it find data that it can’t find on its own. I had been thinking of App::Info as a non-interactive API on which interactive APIs could be built. In other words, if someone got data from App::Info but wanted to confirm it via some interface, they would have to write the wrapper code around App::Info to do it.

But I started thinking about the problem, since, as Dave pointed out, such a wrapper would be very thin. It seemed unnecessary. I didn’t want to just add code to the App::Info subclasses that would prompt users and such, or issue print statements to notify the user that something had happened, so I pondered other, more elegant solutions. I was somewhat stumped until I happened to be leafing through the GoF book, where the Chain of Responsibility pattern hit me as a near-ideal solution.

This pattern inspired a major new feature for App::Info: events and event handling. I added methods to the App::Info base class that can be used by subclasses to trigger different kinds of events. By default, these event requests aren’t handled, but clients can associate event handler objects with a given App::Info object, and those handlers can handle the event requests any way they please. The advantage to this approach is that, by and large, subclass implementors don’t have to think about how to handle certain types of events, only where they need to trigger them. At the same time, the pattern frees App::Info users to handle those events in any way they wish. If they want to ignore them, they can. If they want to print them to STDOUT or to a log file, they can. If they want to prompt the user for more information, why, they can do that, too.

The result is what I think of as a really solid API for gathering information about locally-installed software in a highly flexible and configurable fashion. There are four different types of events:

  • info events, which simply send a message describing what the object is doing.

  • error events, which send a message when something has gone wrong – i.e., non-fatal errors.

  • unknown events, which occur when the object is not able to collect a relevant piece of data on its own.

  • confirm events, which are triggered when a central piece of information has been collected, and the object needs to ensure that it’s the correct data.

Any or all of these types of events can be handled or not, by one event-handling object or by several, however the client sees fit. A single event can even be handled by multiple handlers! I also provided some example event handler classes that will likely cover the majority of uses. They print messages to file handles, trigger Carp functions, or prompt the user for data to be entered. But because of the event architecture, event handling is in no way limited to these approaches. Someone might want to write a handler that uses Log::Dispatch to record the events. Or maybe a developer wants to write a GUI installer, and so needs to handle unknown events by presenting a Tk dialog box to her users. The new event architecture allows these approaches and more.
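
Here’s a minimal sketch of what client code might look like. The handler class names and constructor arguments shown here are illustrative, so check the documentation for the exact interface:

    use App::Info::HTTPD::Apache;
    use App::Info::Handler::Print;
    use App::Info::Handler::Carp;
    use App::Info::Handler::Prompt;

    # Associate a handler with each event type: info messages go to
    # STDOUT, errors become warnings via Carp, and the object prompts
    # the user when it can't find or needs to confirm a piece of data.
    my $apache = App::Info::HTTPD::Apache->new(
        on_info    => App::Info::Handler::Print->new( fh => 'stdout' ),
        on_error   => App::Info::Handler::Carp->new( level => 'warn' ),
        on_unknown => App::Info::Handler::Prompt->new,
        on_confirm => App::Info::Handler::Prompt->new,
    );

    print 'Found Apache ', $apache->version, "\n" if $apache->installed;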

I’d be interested in any feedback on this design. I gave it quite a bit of thought, and I think it’s pretty good. But I’m just one developer, and the opinions of others will help make it a better API going forward (just as Dave’s comments triggered this development). I’d also like to encourage folks to start thinking about new subclasses. There are a lot of software packages and libraries out there that people depend on to get their work done, and, IMHO, App::Info provides a good standardized platform for determining those dependencies.

In the meantime, I think I’ll start by offering a patch to DBD::Pg’s Makefile.PL so that it can figure out where the PostgreSQL libraries are without forcing people to set the POSTGRES_INCLUDE and POSTGRES_LIB environment variables. Look for a patch later this week. I might also propose an OSCON lightning talk on this topic; I’ll have to give that some thought.
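
To give a sense of what I have in mind, the Makefile.PL could do something along these lines. Treat this as a rough sketch rather than the final patch; the method names should be double-checked against the App::Info::RDBMS::PostgreSQL docs:

    use ExtUtils::MakeMaker;
    use App::Info::RDBMS::PostgreSQL;

    my $pg = App::Info::RDBMS::PostgreSQL->new;
    die "PostgreSQL doesn't appear to be installed\n" unless $pg->installed;

    # No more POSTGRES_INCLUDE or POSTGRES_LIB environment variables:
    # just ask App::Info where the headers and libraries live.
    WriteMakefile(
        NAME => 'DBD::Pg',
        INC  => '-I' . $pg->inc_dir,
        LIBS => [ '-L' . $pg->lib_dir . ' -lpq' ],
    );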

Originally published on use Perl;

More App::Info

Yes, I put out a new version of App::Info today. Well, a couple of versions, actually.

First of all, all the problems with the unit tests should be fixed. They actually aren’t all that comprehensive, but since the values returned from the various methods can vary, it didn’t make sense for them to be super precise. I have internal unit tests that are more precise, but they don’t execute when folks download App::Info from the CPAN.

But the major change in version 0.10 is the addition of error levels. Now when you construct an App::Info object, you can specify an error level that corresponds to a Carp function. Subclass writers (currently just yours truly) then just have to use the error() method to record errors (although serious problems should probably just croak). Client code can specify how it wants to handle errors. The default is “carp”, but the CPAN unit tests, for example, use “silent”.

This functionality makes App::Info much more customizable for creating installation utilities, as the problems the subclasses run into – such as not being able to find a file they need, or not being able to parse a value from a file – can be reported as verbosely as necessary.
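
In client code, the idea looks something like this (the exact parameter name may differ a bit from this sketch, so check the docs):

    use App::Info::RDBMS::PostgreSQL;

    # Default behavior: errors are reported via Carp's carp().
    my $pg = App::Info::RDBMS::PostgreSQL->new( error_level => 'carp' );

    # The CPAN unit tests, on the other hand, keep things quiet.
    my $quiet_pg = App::Info::RDBMS::PostgreSQL->new( error_level => 'silent' );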

Next up, Matt Sergeant suggests that I borrow from the AxKit Makefile.PL to make interrogating libiconv much more robust. Maybe in a couple of weeks. I’ve been back-burnering work that I really need to get done in order to work on this!

Originally published on use Perl;

All about App::Info

Yesterday I released App::Info to the CPAN. I started it on Friday, and put the finishing touches on it just yesterday. It was a busy weekend.

I got the idea for App::Info after looking at the work Sam Tregar has done building an installer for Bricolage. He had done all this work to determine whether and how Apache, PostgreSQL, Expat, and libiconv had been installed, and it seemed a shame not to take that code and generalize it for use by others. So I whipped up App::Info, with the idea of building a framework for aggregating data about applications of all kinds, specifically for the purpose of determining dependencies before installing software.

I think it has turned out rather well so far. I added code to determine the version numbers of libiconv and Expat, although it’s imperfect (and that accounts for the CPAN-Testers failures – I’ll have a better release with some of the bugs fixed shortly). But overall the idea is for this to be a uniform architecture for learning about software installed on a system, and I’d like to invite folks to contribute new App::Info subclasses that provide metadata for the applications with which they’re most familiar.

That said, this is a new module, and still in flux. I’ve been talking to Dave Rolsky about it, as he has been thinking about the need for something like this, himself. In the past, Dave and I have talked about creating a generalized API for installing software, and Dave has even set up a Savannah project for that purpose. In truth, I had envisioned App::Info as one part of such an initiative – the part responsible for determining what’s already installed. And while the API I’ve created is good for this, Dave points out that it’s not enough. We need something that can also prompt the user for information – to determine if the right copy of an application was found, for example.

I think I can work this into App::Info relatively easily, however. Currently, if App::Info can’t find the data it needs, it issues warnings. But this isn’t the best approach, I think. Sometimes, you might want such errors to trigger exceptions. Other times, you might want them totally silent. So I was planning to add a flag to the API so that you can specify the behavior for such errors – something like DBI’s RaiseError or PrintError options. But then, it’s just another step to add a prompting option. Such an option could be configured to prompt for new data at every step of the process, or only at important points (like finding the proper copy of httpd on the file system), or only when data can’t be found.
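
Nothing like this exists in the module yet, but here’s the sort of interface I’m imagining (purely hypothetical at this point):

    use App::Info::HTTPD::Apache;

    # Hypothetical flags in the spirit of DBI's RaiseError and PrintError:
    # warn on errors rather than die, and prompt the user only when
    # App::Info can't find a piece of data on its own.
    my $apache = App::Info::HTTPD::Apache->new(
        PrintError => 1,
        RaiseError => 0,
        Prompt     => 'on_unknown',
    );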

So I hope to find the tuits to add this functionality in the next week. In the meantime, I’m going to try to keep up-to-date on my journal more.

Originally published on use Perl;

My OS X Adventures

About two weeks ago, I made the switch from Linux to Mac OS X on my Titanium PowerBook. Yes, it was quite an adventure. But I’m happy to report that it went extremely well. I got all of my favorite *nix utilities running, as well as all the essentials a dedicated Perl/mod_perl/PostgreSQL hacker might want.

That’s not to say that it wasn’t a lot of work. But even more work was all the notes I was keeping so that I’d know how to do it again. And since I was taking all those notes, I figured I might as well do the extra work of writing up a little article detailing what I did and how I did it. The result is now available as My Mac OS X Adventures.

Any feedback will be most welcome, as I’d like to keep this page up-to-date!

Originally published on use Perl;

Catch Up

It’s good to finally be caught up with all the Bricolage posts I’ve just been ignoring for the last week or so. That’s not to say that I’m totally caught up – there are still a couple of open bugs I need to tackle. But there are a number of important conversations going on on the development list, and I’ve been meaning to reply to them for a while. It feels good to finally get to them.

Among the issues we’re looking at right now are adding a preference to allow users to specify how URIs are created, and a complete reworking of the element system so that it’s more consistent and doesn’t seem so bolted together. I think that 1.6 will turn out to be the killer version of Bricolage — the one that really makes its mark — if we go forward with the changes. And I can see no reason why we wouldn’t!

Meanwhile, my new DSL bridge finally arrived, so I’m going to spend some time getting that up and going. It seems like it takes a whole lot more setup than my old one, but whatever! I’m paying a lot less and getting just as much. Hopefully it’ll be done by tomorrow and I can leave the modem to rot and die.

Originally published on use Perl;

Whew!

I worked a lot over the weekend on the Bricolage section of the upcoming O’Reilly book on Mason, and now it’s done. I mean, it’s a draft. Naturally, I’ll want to go in and edit it again in a few weeks, once I have some distance from it.

Dave and Ken were kind enough to let me rewrite their section on content management, since for all intents and purposes Mason-CM appears to be dead, and Bricolage is quite actively maintained. I appreciate their allowing me to contribute – I think this will be good for Mason and for Bricolage.

At any rate, after a busy couple of weeks, I’m ready to get back to plugging bugs in Bricolage and planning its future. I’m a bit behind. If I didn’t have to spend so much time trying to hustle up work, it would help. Anyone want to hire an experienced Perl hacker and Bricolage developer?

Originally published on use Perl;

Release, Release, Release!

Well, I got Bricolage 1.3.0 out yesterday. It’s a development release for the upcoming 1.4.0 release. There are two major new features in 1.3.0. The big one is a SOAP server. Sam Tregar has been hard at work on this puppy. It promises to simplify the process of autopublishing stories, and to make importing and exporting assets and elements a no-brainer. I say kudos to Sam for his hard work.

The second major new feature is a real live configure process. Mark Jaroski of the World Health Organization developed this for us using Autoconf. Unfortunately, it wasn’t ready in time for the 1.3.0 release, but it’s already looking better and should be in 1.3.1 in a couple of days. Meanwhile, I need to get 1.2.1 out. This is mostly a bug-fix release of Bricolage, although there is one new feature. A new module greatly simplifies the process of Apache configuration, making it easy to, among other things, run Bricolage on a virtual host. The one drawback to this feature is that it relies heavily on mod_perl <Perl> sections, and these are somewhat broken, although there is a patch. Other than that, there are loads of bug fixes in 1.2.1, so look for it soon!

Originally published on use Perl;

mod_perl Bug Confirmed!

The mod_perl bug that I reported finding last week has been confirmed and a patch supplied by Salvador Ortiz Garcia. Read all about it here.

It turns out to be uglier than I thought, because which Location and Directory directives mod_perl decides to “upgrade” to their Match versions is random. Actually, we’ve found it to be consistent in Bricolage (the relevant source code is here). That is, although we can’t predict which directives mod_perl will “upgrade”, it does tend to “upgrade” the same ones every time. This allows me to check the mod_perl version and try to do the right thing regardless. Maybe we’ll require mod_perl 1.27 when it finally comes out.

But at any rate, I’m glad to have the thing addressed and understood. It’s not common that I notice a bug in Perl or mod_perl, and it’s rewarding to see someone pick it up and address it quickly. Thanks Salvador!

Originally published on use Perl;

You’re So Vain

I’d just like to announce that I finally have a personal web page up. It has been years since I took down my “Hotlinks 1995!” page at U.Va., and the time finally arrived for me to have my own site again. Comments on the content and layout are most welcome.

Later, I’ll post a rant on how I exploited use Perl; for my own nefarious purposes.

Originally published on use Perl;

A mod_perl Bug?

I think I’ve found a bug in the <Perl> sections of mod_perl. More information can be found here. The upshot is that the Location directive, when used in <Perl> sections, seems to be used internally as a LocationMatch directive instead.

I ran into this because I was simplifying Bricolage’s Apache configuration. I moved the whole complex Apache configuration into a Bricolage module, so that it’s much simpler to configure Apache to run Bricolage — and much easier to use virtual hosts. (This is all in Bricolage’s CVS, BTW — it’s not yet released). I got around the problem by specifying all of my Location directives with a caret (^) prepended to them so that they behave like a regex version of Location (i.e., LocationMatch), but I’m kinda annoyed to have to do that. Am I right in thinking that the LocationMatch directives add a bit more overhead to every request?
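
Here’s roughly what the workaround looks like in a <Perl> section. The handler name below is just a placeholder for the real Bricolage handler:

    <Perl>
        # Prepending a caret turns the path into an anchored regex, so it
        # matches the same requests whether mod_perl treats the directive
        # as Location or "upgrades" it to LocationMatch.
        $Location{'^/bricolage'} = {
            SetHandler  => 'perl-script',
            PerlHandler => 'My::Handler',    # placeholder
        };
    </Perl>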

Originally published on use Perl;

Software Development Methodology

I feel that it’s important to have a comprehensive approach to software development. It’s not enough to be good at coding, or testing, or writing documentation. It’s far better to excel at managing every step of the development process in order to ensure the quality and consistency of the end-to-end work as well as of the final product. I aim to do just that in my work. Here I briefly outline my methodology for achieving that aim.

First, good software development starts with good planning and research. I strive to attain a thorough understanding of what I’m developing by listening to the people to whom it matters most: the users. By gaining insight into how people in the target market think about the problem space, and by strategizing about how technology can address that space, a picture of the product takes shape. This research coalesces into a set of pragmatic requirements and goals that balance the demands of a realistic development schedule with the needs and desires of the target market.

Once the requirements have been identified, it’s time for prototyping. Task flow diagrams of user interactions model the entire system. Evaluations from the target market refine these schematics, shaping the look and feel of the final product. I cannot emphasize enough the importance of seeking market feedback to build solid and meaningful metaphors into the design. These concepts drive the user experience and make or break the success of the final product. The outcome of this feedback loop will be a UI, terminology, and object design grounded on intuitive concepts, scalable technologies, and a reliable architecture.

Next, a talented development team must be assembled and backed by a dependable, project management-oriented implementation infrastructure. Team-building is crucial for the success of any product, and in software development, a diverse set of engineers and specialists with complementary talents must come together and work as an efficient whole. As a result, I consider it extremely important to create a working culture of which team members want to be a part. Such an environment doesn’t foster a sense of entitlement, but rather of conviviality and excitement. If team members believe in what they’re doing, and they enjoy doing it, then they’re likely to do it well.

And what they’ll do is actually create the software. Each element of the product design must be broken down into its basic parts, fit into a generalizable design, and built back up into meaningful objects. I further require detailed documentation of every interface and implementation, as well as thorough unit testing. In fact, the tests are often written before the interfaces are written, ensuring that they will work as expected throughout the remainder of the development process. All aspects of the application must be implemented according to a scalable, maintainable methodology that emphasizes consistency, quality, and efficiency.

The emphasis on quality naturally continues into the quality assurance phase of the development process. The feature set is locked so that development engineers can work closely with QA engineers to test edge conditions, identify bugs, fix them, and ensure that they remain fixed. I prefer to have QA engineers punish nightly builds with suites of tests while development engineers fix the problems identified by previous days’ tests. QA is considered complete when the product passes all the tests we can dream up.

And finally, once all of the QA issues have been addressed, the final product is delivered. Naturally, the process doesn’t stop there, but starts over – in fact, it likely has already started over. New features must be schematically tested with likely users, and new interfaces designed to implement them. The idea is to end up with a solid product that can grow with the needs of the target market.

Tuesday’s SF.pm Meeting

Tuesday night was the monthly San Francisco Perl Mongers meeting. Invited to speak that night was none other than Randal L. Schwartz, who spoke on Perl 6. But there were a few last-minute surprise guests: Larry Wall and Chip Salzenberg! What with these three participating alongside SF.pm’s usual suspects (Vicki Brown, Rich Morin, et al.), it was an evening of heavyweights unlike any we’ve ever seen at SF.pm. Randal invited Larry to come while they were on the cruise last week, and it was great to have him there. I guess Randal had never given a talk like this with Larry in the audience before, because he said he was actually nervous, despite his years of teaching!

A good time was had by all, I think, and we all learned a lot about Perl 6. Having Larry there to chime in (and read his own quotes from Randal’s slides!) really added to the content of the talk. I, for one, am incredibly excited about Perl 6. It’ll go a long way toward making the big database applications I’ve gotten into the habit of writing much easier to control. And the syntax Larry is outlining in his Apocalypses is really coming along. I just wish I didn’t have to wait for it until…when? There are 33 chapters in Camel trois, and we’re only up to Apocalypse 4!

At any rate, it was a great meeting. The after-dinner talk in the bar was good, too. Really a very entertaining bunch. I’d like to publicly thank Quinn Weaver <zenji at gmx dot net> and Karen Andelin <kaan at chevrontexaco to com> for organizing a great event – and for getting other members to contribute en masse to the Perl Foundation 2002 Grant fund. And thanks also to Randal (who promised that Stonehenge Consulting Services would match the funds raised at the meeting), Larry, and Chip for making it a very worthwhile event!

Originally published on use Perl;

CVS Branching Philosophy

“This will go down in [my] permanent record,” eh? I guess I’d better make it good.

There’s been quite the debate going on over on the Bricolage developers list. For those who don’t know about Bricolage, it’s a full-featured, open-source, 100% Perl content management system that I maintain on SourceForge. You can learn more about it here.

Anyway, the debate is over the art and science of CVS management. We’ve been adding features to both minor and major releases up to now, but there has been substantial argument that the minor releases should be bug-fix only. The advantage of this approach is that new code won’t threaten stable releases. The disadvantage is that it could slow development: quick and easy new features would have to wait for more involved features to be complete before they could see the light of day in a release.

There are some strong opinions, but I’m currently sitting on the fence. More opinions are welcome!

Originally published on use Perl;