This video, Timelapse of the Future, has kept me thinking ever since Kottke
posted it a few weeks ago. Given current knowledge, the expectation is that the
universe will go on forever, but thanks to entropy and expansion, it will
eventually be full of, well, nothing at all. This rather limits the time
hospitable to life. This arresting quotation from Brian Cox starting at the
12:55 mark captures it:
As a fraction of the lifespan of the universe, as measured from its beginning
to the evaporation of the last black hole, life, as we know it, is only
possible for one thousandth of a billion billion billionth, billion billion
billionth, billion billion billionth of a percent.
Boy howdy our time is limited. We should make the best of it, to let our brief
time be as pleasant, happy, and fulfilling as possible. All of us. Be kind,
empathetic, compassionate, and generous with your fellow human beings. In the end,
only how well we treat each other matters.
This Jan Wischweh piece surveying the recent literature on the so-called
“agile crisis” is a bit of a slog, but these bits caught my attention:
One striking symptom of the Agile Crisis is the impositions of Agile on
teams, which seems to be a common practice today. If Agile is so great and
really gives more power and autonomy to the developers, why is it commonly
imposed by upper management?
Trust is the basis for any good communication. But trust cannot be demanded.
It needs to be earned. This problem is highly related to Agile, as trust is
essential for any Agile team. But it can never be imposed.
And the issue of trust cannot be addressed without looking at the problem of
power. Agile, especially Scrum, is more about efficiency than about
empowering developers and it is not a shift away from Taylorism. On closer
inspection, this will be visible in every single conflict within companies
trying to transform towards Agile. Quite the opposite is true: it makes
people more replaceable and controllable and is a modern and competitive form
of Taylorism.
Indeed, management’s focus on process and reproducibility (as in Taylorism)
often drives the adoption of agile development processes. But truly autonomous
agile teams must be empowered to make their own decisions. That means inviting
them to adopt agile practices, rather than imposing those practices on them, and
it means trusting teams to make decisions.
In other words, unilaterally determining team composition, deciding that they’ll
do “agile” or “scrum” or “kanban”, and reserving the power to override their
decisions perpetuates a traditional focus on repetitive tasks and control,
rather than autonomy and craft. It demonstrates a lack of trust in the team, and
without that trust, the team won’t trust management, either — an untenable,
potentially catastrophic situation. No wonder “Agile” fails so often that we now
have an “agile crisis”.
I keep coming back to the fundamental idea that teams are made out of people,
and management should always support, promote, and empower the people in the
company with the autonomy to excel and to do their best work. People over
process.
#1 rule: No one should ever be surprised with a “you’re fired.” That’s how you
create disgruntled employees, embarrassing Glassdoor reviews, a dip in team
morale, etc. An out-of-the-blue firing is a failing on the manager’s part,
not the employee’s.
So how do you do that? The most important bit:
Give them a fair shot to improve. As a leader, it’s your job to try
to make it work; each employee is owed that.
Practice listening skills. Demonstrate that you believe in them, and you
want to see them improve. Commit to giving a LOT more feedback (specific &
If you have little faith that the employee will be able to improve, taking these
and the other steps Jennifer recommends might feel like a waste of time. But
unless the employee’s actions involve violence, harassment, fraud, etc., you
need to give them every chance possible for not only their benefit, but the
benefit of their coworkers. Of course you don’t mention it to your other
employees, but people talk, they know what’s going on, and they all need to know
that if they step out of line, you’ll support them as much as you can.
In other words, a firing should never come as a surprise to either the employee
getting the sack or their coworkers. Because worse than negative Glassdoor
reviews is the erosion of trust among the people you continue to work with after
the fact.
Tone is set from the top, they say. I once started a company and ran it for
10 years, but I rarely thought about leadership, let alone setting the tone. It
mattered little, since I was the sole employee for most of that time. Now I
ponder these topics a lot, as I watch leaders fail to consciously create and
promulgate an ethical organizational culture. I don’t mean they’re unethical,
though some might be. I mean they, like me, never gave it much thought, or
recognized its importance.
This myopia degrades long-term prospects, leaving employees and partners to
invent and impose their own interpretations of the organization’s nature,
motives, and goals. Without a clearly defined direction, people make their own
way, and despite the best intentions, those ideas surely don’t quite align with
the underpinning ideas and expectations of leadership.
Next time I find myself in the position to shape an organization — found a
company, create a new group, organize a team — I will give careful thought to
these issues, and formalize them in foundational documents that provide focus
and direction for the duration. A sort of Organizational Constitution. And
like any constitution, its articles will both set the tone and encode the
culture.
Culture establishes an environment in which members of the organization feel
cared about, respected, valued, and physically and psychologically safe; where
they understand what they’re a part of and are fulfilled by their roles. Culture
recognizes what people can contribute, and finds ways to let them do so. It lets
them know there’s a place for them, and that they actively contribute to the
organization’s success.
Culture cannot be legislated, of course, but a preamble announces intentions,
sets the tone, and establishes the foundation on which the rest follows.
Article 1. Values
A clear articulation of the organization’s Values — its principles and
beliefs. These comprise both internal-facing expectations for members as well as
global values and beliefs defining the organization’s place and role in the
world. Ideally, they’re the same. Such values inform expectations for partners,
investors, customers, and users. Leadership must demonstrate these values in
their everyday work, and always be mindful of them when making decisions.
Examples of values meaningful to me include:
Diversity & Inclusivity
Empowering the disempowered
Making the world a better place
Advancing social justice
Doing the right thing
Making people happy/productive/empowered/independent/delighted
Article 2. Vision
The Vision lays out how the organization wants to make its dent in the
universe. It focuses on the future, and what the organization ultimately seeks
to become. It should align closely with the Values, bringing them to bear to
define the organization’s purpose, and describe the long-term, measurable goal.
The Vision answers questions such as:
What are our hopes and dreams?
What problem are we solving for the greater good?
Who and what are we inspiring to change?
Article 3. Mission
The Mission focuses on the now, and defines how the organization goes about
achieving its Vision. It must never contradict the Vision or Values; indeed,
they shape the Mission. It’s the core of the business, and from the Mission
come Strategy and Execution. A mission statement embodies the Mission by
answering questions such as:
What do we do?
Whom do we serve?
How do we serve them?
Article 4. Brand
Closely aligned with Values, the Brand defines the organization. The brand
commits to the Values, Vision, and Mission, recognized both internally and
externally, so that anyone can say what the organization stands for and how it
goes about achieving its goals. Decisions that might erode the Brand or violate
its underpinning Values must be avoided.
Article 5. Strategy
The Mission is the “what”; the Strategy is the “how”. The Strategy describes
how the organization intends to execute on its Mission to achieve its Vision. It
should be high-level but practical, goal-focused but not methodologically
imperative. It defines objectives that clearly demonstrate value for existing
and prospective constituents (customers, users, etc.) while adhering to — and
never corroding — the organization’s Values and Vision.
Article 6. Execution
Everyone in the organization should be aware of what the Strategy is, what its
objectives are, and how it furthers the Mission while adhering to its Values.
Recognition of and continual reinforcement of the Strategy and objectives
creates focus, providing a guide for decision-making. Ultimately, Execution
means delivery. It requires meaningful goals to fulfill the Strategy and the
achievement of its objectives: shipping product, meeting deadlines, effectively
promoting products and solutions, and acquiring happy constituents who enjoy
the fruits of the organization’s efforts and derive benefit and value from them.
Article 7. Structure
The organization Structure must enable it to effectively execute the Strategy.
That means cohesive teams with clear mandates and the focus and autonomy to
effectively execute. Strong coupling of deliverables across teams ought to be
minimized, but expert consultation should be provided where needed. Everyone in
the organization should be aware of the Structure, and understand their roles
and the roles of other teams.
Article 8. Communication
Leadership must be aware of all of the above tenets and invoke them
regularly. Speak every day about what the organization believes in (Values),
what it wants to see in the world (Vision, Mission), and how it contributes to
making that world (Strategy, Execution). Communicate consistently and constantly
within the context of the products made and services provided — toward the
output of the Strategy, the organization’s deliverables. Demonstration of the
alignment of the Strategy to the Values of the organization must be continuous,
and always consulted when making decisions.
This Communication must be verbal, but also written. Guiding documents must
outline all of these aspects, and tie all the pieces together. So in addition to
the constitutional articles that define the Values, Vision, Mission, there must
be living documents that articulate the Strategy for achieving the Vision and
Mission. These include road maps, planning documents, specifications,
promotional plans and materials, organizational structure, and team and role
definitions.
Pursuit of Happiness
Inconsistency of these articles abounds in the business world, since companies
seldom convene a constitutional convention to create them — but sometimes
because internal- and external-facing messaging varies. It need not be the case.
Perhaps working through these topics with a team will help constitute the
grounds on which the organization functions and presents itself to its members
and the world. Even if some members disagree with or are indifferent to some of
its tenets, all will appreciate the clarity and focus they engender. And an
organization with purpose gives its members purpose, meaning to their work, and
satisfaction in the furthering of the mission.
In just about any discussion of GDPR compliance, two proposals always come up:
disk encryption and network perimeter protection. I recently criticized the
focus on disk encryption, particularly its inability to protect sensitive data
from live system exploits. Next I wanted to highlight the deficiencies of
perimeter protection, but in doing a little research, I found that Goran Begic
has already made the case:
Many organizations, especially older or legacy enterprises, are struggling to
adapt systems, behaviors, and security protocols to this new-ish and ever
evolving network model. Outdated beliefs about the true nature of the network
and the source of threats put many organizations, their information assets,
and their customers, partners, and stakeholders at risk.
What used to be carefully monitored, limited communication channels have
expanded into an ever changing system of devices and applications. These
assets are necessary for your organization to do business—they are what allow
you to communicate, exchange data, and make business decisions and are the
vehicle with which your organization runs the business and delivers value to
customers.
Cloud computing and storage, remote workers, and the emerging preference for
micro-services over monoliths1 vastly complicate network
designs and erode boundaries. Uber-services such as Kubernetes recover some
control by wrapping all those micro-services in the warm embrace of a monolithic
orchestration layer, but by no means restore the simplicity of earlier times.
Once the business requires the distribution of data and services to multiple
data centers or geographies, the complexity claws its way back. Host your data
and services in the cloud and you’ll find the boundary all but gone. Where’s the
data? It’s everywhere.
In such an environment, staying on top of all the vulnerabilities — all the
patches, all the services listening on this network or that, inside some
firewall or out, accessed by whom and via what means — becomes exponentially
more difficult. Even the most dedicated, careful, and meticulous of teams sooner
or later overlook something. An unpatched vulnerability. An authentication bug
in an internal service. A rogue cloud storage container to which an employee
uploads unencrypted data. Any and all could happen. They do
happen. Strive for the best; expect the worst.
Because it’s not a matter of whether or not your data will be breached. It’s
simply a matter of when.
Unfortunately, compliance discussions often end with these two mitigations, disk
encryption and network perimeter protection. You should absolutely adopt them,
and a discussion rightfully starts with them. But then it’s not over. No, these
two basics of data protection are but the first step to protect sensitive data
and to satisfy the responsibility for security of processing (GDPR Article
32). Because sooner or later, no matter how comprehensive the data
storage encryption and firewalling, eventually there will be a breach. And then
“What next” bears thinking about: How do you further reduce risk in the
inevitable event of a breach? I suggest taking the provisions of the GDPR at
face value, and consider three things:
We cringed at the characterization of the Russian online influence campaign as
“sophisticated” and “vast”: Russian reporting on the matter — the best
available — convincingly portrayed the troll operation as small-time and
ridiculous.
It was, it seems, fraudulent in every way imaginable: it perpetrated fraud on
American social networks, creating fake accounts and events and spreading
falsehoods, but it was also fraudulent in its relationship to whoever was
funding it, because surely crudely designed pictures depicting Hillary
Clinton as Satan could not deliver anyone’s money’s worth.
I think this is exactly right. So much of the coverage depicts the Russian
hacking as “vast” and “sophisticated”. As a technologist working in information
security, I find this framing irresponsible and naïve at best — complicit at
worst. (Sadly, even the former director of the CIA uses this framing.) The
techniques are those used for fraud, extortion, blackmail, and the like. They
effectively advance a criminal conspiracy because they’re simple; they exploit
human vulnerabilities. A far cry from clandestine government surveillance or
espionage, the point is disinformation for the benefit of a very few. Painting
it as “massive” or “advanced” only increases its effectiveness.
That’s just one aspect of the problematic coverage. Gessen also brings a
sociological perspective to bear: The Russian government and its cohort more
closely approximates a “Mafia state” than a dictatorship. A press that
understands the difference will cover these people not as heads of state, but as
criminals who happen to control states. I hope some, at least, take it to heart.
I’ve been thinking a lot about what creative professionals want and expect out
of their jobs. We require certain base features of a job, the absolute minimum
for even considering employment:
Fair, livable compensation for the work
Comprehensive, low maintenance, effective benefits (especially health care)
Equitable work hours and conditions (vacation time, work/life balance)
Safe work environment
Employers attempting to skimp on any of these items devalue the people they
employ and the work they do. Don’t do that.
Assuming an organization meets these fundamentals, what else gets people
excited to go to work? What makes employees happy, committed, and productive
members of the team? Fortunately, I’m far from the first to explore this topic.
Paloma Medina reduces the literature to the muscular acronym BICEPS:
There are six core needs researchers find are most important for humans at
work. Not all are equally important to everyone. You might find that equity
and belonging are most important to you, but choice and status are most
important to your employee. Getting to know them and coaching to them is a
shortcut to making others feel understood and valued (aka inclusivity).
The BICEPS core needs:
Belonging
Improvement/Progress
Choice
Equality/Fairness
Predictability
Significance
Beyond the utility of having these needs enumerated to think about collectively
— with obvious implications — I find it useful to examine them from varying
frames of reference. To that end, consider each not from the perspective of
rewards and perks, certificates and foosball tables. Ponder them with the goal
of creating a virtuous cycle, where the work improves the company, engendering
greater satisfaction in the work, and encouraging more of the same.
Organizations serious about encouraging friendships and closeness often
highlight social gatherings, team-building exercises, and outings. But don’t
underestimate the motivation of the work. Small teams given the space to
collaborate and accomplish their goals might be the best structure to create a
sense of belonging to a tight-knit group — and for employees to find joy in
their accomplishments.
Then reward those accomplishments. Not just with compensation or perks. No. Put
the full force of the business behind them. If a team finished work on a feature
or shipped a product, don’t limit recognition to a cocktail hour and a raised
toast. Promote the hell out of it through all available channels: marketing,
sales, blogging, support, community forums, whatever. The surest road to
satisfaction and a sense of belonging is to turn that work into a palpable
success for the organization.
Funds for conferences, training, and formal education clearly help employees
make progress in their careers, or to simply improve themselves. But people
also get satisfaction from work that helps the company to execute its
strategies and meet its goals. Assuming the vision aligns with an employee’s
values,1 contributing to the material achievement of
that vision becomes the employee’s achievement, too.
So be sure to create opportunities for all employees to grow, both in their
careers and contributions to the company mission. Avoid artificial divides
between those who execute and those who support them. Not everyone
will participate; still, encourage ideas and suggestions from all quarters and,
where possible, adopt them. Beyond the old canard to “act like an owner”,
clearly link organizational success to the ideas and work that created it, and
give everyone the chance to make a difference. They improve as the business
improves, and that’s progress.
Typically, “choice” means different healthcare plans, Mac or PC, sitting or
standing desk. Such perks are nice, but not materially meaningful.2
The choices that warm the creative worker’s heart have much more to do with
autonomy and decision-making than fringe benefits. Let teams choose their
projects, decide on technologies, self-organize, make the plans to execute.
People empowered to take initiative and make decisions without micromanagement
or post-hoc undermining find motivation and reward in the work itself. Let them.
Yes, grant employees equal access to resources, to management, to the
decision-making process, and any other information necessary for their work,
benefits, etc. That only stands to reason. But give them equal access to
interesting work, too. Where possible, avoid unilaterally appointing people to
teams or projects: let them organically organize and pick their collaborators
and projects. Such decisions mustn’t be made in isolation; it wouldn’t be fair.
Rather, you’ll need to hold regular get-togethers of all relevant teams to make
such decisions collectively, PI Planning-style. Give everyone a voice, leave
no one out, and they will mostly work out the optimal distribution of tasks.
In addition to paying employees on time, every two weeks, make the work cycle
predictable, too. Everyone should have a good idea when things happen, what the
iteration cycle looks like, what the steps are and when they get slotted into
the schedule, when projects complete and products ship. Just as importantly,
make it clear what they’ll be working on next — or at least what’s in the
pipeline for the teams to choose and plan for in the next iteration of the
development process. A predictable cadence for the work lets people understand
where they are at any given time, what’s next, and what successful execution
looks like.
Titles and industry recognition, obviously, but this item brings my commentary
full circle. Make sure that the work employees do gets seen not only by
immediate managers, not simply lauded at the weekly dessert social. Make it a
part of the success of the company. Promote the hell out of it, let customers
and users know that it exists and solves their problems — no, show them —
and shout it from the rooftops so the entire world knows about all the stuff
made by your super valuable team of humans.
They’ll be happier, more satisfied, and ready to make the next success.
A very big assumption indeed. I expect to write a
bit about company strategies and alignment to employee values soon. ↩
Okay, sometimes a choice is no choice at all. Mac or nothing
for me! ↩
Full disk encryption provides incredible data protection for personal devices.
If you haven’t enabled FileVault on your Mac, Windows Device Encryption on
your PC, or Android Device Encryption on your phone, please go do it now (iOS
encrypts storage by default). It’s easy, efficient, and secure. You will likely
never notice the difference in usage or performance. Seriously. This is a
no-brainer.
Once enabled, device encryption prevents just about anyone from accessing device
data. Unless a malefactor possesses both device and authentication credentials,
the data is secure.
Periodically, vulnerabilities arise that allow circumvention of device
encryption, usually by exploiting a bug in a background service. OS vendors tend
to fix such issues quickly, so keep your system up to date. And if you work in
IT, enable full disk encryption on all of your users’ devices and drives. Doing
so greatly reduces the risk of sensitive data exposure via lost or stolen
devices.
Servers, however, are another matter.
The point of disk encryption is to prevent data compromise by entities with
physical access to a device. If a governmental or criminal organization takes
possession of encrypted storage devices, gaining access to any of the data
presents an immense challenge. Their best bet is to power up the devices and
scan their
ports for potentially-vulnerable services to exploit. The OS allows such
services transparent access to the file system via automatic decryption. Exploiting
such a service allows access to any data the service can access.
But, law enforcement investigations aside,1 who bothers
with physical possession? Organizations increasingly rely on cloud providers
with data distributed across multiple servers, perhaps hundreds or thousands,
rendering the idea of physical confiscation nearly meaningless. Besides, when
exfiltration typically relies on service vulnerabilities, why bother taking
possession of hardware at all? Just exploit vulnerabilities remotely and leave
the hardware where it is.
Which brings me to the issue of compliance. I often hear IT professionals assert
that simply encrypting all data at rest2 satisfies the
responsibility for the security of processing (GDPR Article 32). This
interpretation may be legally correct3 and relatively
straightforward to achieve: simply enable disk encryption, protect the keys
via an appropriate and closely-monitored key management system, and migrate data
to the encrypted file systems.4
This level of protection against physical access is absolutely necessary for
protecting sensitive data.
Necessary, but not sufficient.
When was the last time a breach stemmed from physical access to a server? Sure,
some reports in the list of data breaches identify “lost/stolen media” as the
breach method. But we’re talking lost (and unencrypted) laptops and drives. Hacks
(service vulnerability exploits), accidental publishing,5
and “poor security” account for the vast majority of breaches. Encryption of
server data at rest addresses none of these issues.
By all means, encrypt data at rest, and for the love of Pete please keep your
systems and services up-to-date with the latest patches. Taking these steps,
along with full network encryption, is essential for protecting sensitive data.
But don’t assume that such steps adequately protect sensitive data, or that
doing so will achieve compliance with GDPR Article 32.
Don’t simply encrypt your disks or databases, declare victory, and go home.
Bear in mind that data protection comes in layers, and those layers correspond
to levels of exploitable vulnerability. Simply addressing the lowest-level
requirements at the data layer does nothing to prevent exposure at higher
levels. Start disk encryption, but then think through how best to protect data
at the application layer, the API layer, and, yes, the human layer, too.
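As it happens, GDPR Article 32 itself names pseudonymisation alongside encryption as a security measure. As one illustration of application-layer protection — my example, not a technique this post prescribes — here’s a minimal Python sketch that pseudonymizes identifiers with a keyed hash, so raw values never land in logs or analytics tables (the function name, identifier, and key are hypothetical):

```python
import hashlib
import hmac


def pseudonymize(value: str, key: bytes) -> str:
    """Replace an identifier with a keyed hash. The key should come
    from a separate, closely-monitored key management system, never
    be stored alongside the data."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()


# Hypothetical key and identifier, for illustration only.
key = b"from-your-key-management-system"
token = pseudonymize("alice@example.com", key)

# Deterministic: records remain joinable on the token without
# exposing the raw identifier; distinct inputs yield distinct tokens.
assert token == pseudonymize("alice@example.com", key)
assert token != pseudonymize("bob@example.com", key)
```

Without the key, the tokens reveal nothing useful, so a breach at the storage or API layer exposes pseudonyms rather than personal data.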
Presumably, a legitimate law enforcement
investigation will compel a target to provide the necessary credentials to
allow access by legal means, such as a court order, without needing to
exploit the system. Such an investigation might confiscate systems to
prevent a suspect from deleting or modifying data until such access can be
compelled — or, if such access is impossible (e.g., the suspect is
unknown, deceased, or incapacitated) — until the data can be forensically
extracted.
The result is the sample project winperl-travis. It demonstrates three
.travis.yml configurations to test Perl projects on Windows:
Use Windows instead of Linux to test multiple versions of Perl. This is the
simplest configuration, but useful only for projects that never expect to
run on a Unix-style OS.
Add a Windows build stage that runs the tests against the latest version
of Strawberry Perl. This pattern is ideal for projects that already test
against multiple versions of Perl on Linux, and just want to make sure
things work on Windows.
The file starts with the typical Travis Perl configuration: select the
language (Perl) and the versions to test. The before_install block installs a
couple of dependencies and executes the travis-perl helper for more flexible
Perl testing. This pattern practically serves as boilerplate for new Perl
projects.
The new bit is the jobs.include section, which declares a new build stage
named “Windows”. This stage runs independently of the default phase, which runs
on
Linux, and declares os: windows to run on Windows.
The before_install step uses the pre-installed Chocolatey package manager to
install the latest version of Strawberry Perl and update the $PATH
environment variable to include the paths to Perl and build tools. Note that the
Travis CI Windows environment runs inside the Git Bash shell environment; hence
the Unix-style path configuration.
The install phase installs all dependencies for the project via cpanminus, then
the script phase runs the tests, again using cpanminus.
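The pieces described above fit together along these lines. This is a sketch, not the canonical winperl-travis file — the travis-perl helper invocation, Chocolatey package name, and Strawberry Perl paths reflect my best understanding, so consult the repository for the authoritative configuration:

```yaml
language: perl
perl:
  - "5.28"
  - "5.26"

before_install:
  # The travis-perl helper for more flexible Perl testing.
  - eval $(curl https://travis-perl.github.io/init) --auto

jobs:
  include:
    - stage: Windows
      os: windows
      language: shell
      before_install:
        # Chocolatey comes pre-installed in the Windows environment.
        - choco install strawberryperl
        # Git Bash shell, hence the Unix-style paths.
        - export PATH=/c/Strawberry/perl/site/bin:/c/Strawberry/perl/bin:/c/Strawberry/c/bin:$PATH
      install:
        # cpanminus ships with Strawberry Perl.
        - cpanm --notest --installdeps .
      script:
        - cpanm --test-only .
```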
And with the stage set, the text-markup build has a nice new stage that ensures
all tests pass on Windows.
The use of cpanminus, which ships with Strawberry Perl, keeps things simple,
and is essential for installing dependencies. But projects can also perform the
usual gmake test1 or perl Build.PL && ./Build test
dance. Install Dist::Zilla via cpanminus to manage dzil-based projects.
Sadly, prove currently does not work under Git Bash.2
Perhaps Travis will add full Perl support and things will become even easier.
In the meantime, I’m pleased that I no longer have to guess about Windows
compatibility. The new Travis Windows environment enables a welcome increase in
confidence.
Although versions of Strawberry Perl prior to
5.26 have trouble installing Makefile.PL-based modules, including
dependencies. I spent a fair bit of time trying to work out how to make it
work, but ran out of steam. See issue #1 for details. ↩
Ahead of the release of Sqitch v1.0 sometime in 2019, I’d like to remove all the
deprecated features and code paths. Before I do, I want to get a sense of the
impact of such removals. So here’s a comprehensive list of the deprecations
currently in Sqitch, along with details on their replacements, warnings, and
updates. If the removal of any of these items would create challenges for your
use of Sqitch, get in touch.
What would be removed:
The core configuration and directory-specification options and attributes:
The preferred solution is configuration values at the target, engine, or
core level (settable via the options on the target, engine, and init
commands, or via the config command).
But I admit that there are no overriding options for the directory
configurations in the deploy/revert/verify/rebase/checkout commands. And
I’ve used --top-dir quite a lot myself! Perhaps those should be added
first. If we were to add those, I think it’d be okay to remove the core
options — especially if I ever get around to merging options to allow both
core and command options to be specified before or after the command name.
The @FIRST and @LAST symbolic tags, which were long-ago supplanted by
the more Git-like @ROOT and @HEAD, and warnings have been emitted for at
least some of their uses for six years now.
Engine configuration under core.$engine. This configuration was supplanted
by engine.$engine four years ago, and came with warnings, and a fix via the
sqitch engine update-config action. That action would also go away.
The core database connection options:
These options were supplanted by database URIs over four years ago. At that
time, they were adapted to override parts of target URIs. For example, if you
have a target URI of db:pg://db.example.com/flipr, you can specify that target,
but then also pass --db-name to just change the database name part of the URI.
I’ve found this occasionally useful, but I don’t think the complexity of the
implementation is worth it.
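A sketch of the usage described, with the core option preceding the command name (the overridden database name is hypothetical):

```
# Deploy to the flipr target, but override the database-name
# part of the target URI:
sqitch --db-name flipr_test deploy db:pg://db.example.com/flipr
```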
The old target options, which were renamed “change” targets back when the
term “target” was adopted to refer to databases rather than changes. Sqitch
has emitted warnings for five years when the old names were used:
The --onto-target and --upto-target options on rebase were renamed
--onto-change and --upto-change.
The --to-target and --target options on deploy and revert were renamed
--to-change.
The --from-target and --to-target options on verify were renamed
--from-change and --to-change.
The script-generation options on the add command were deprecated four
years ago in favor of --with and --without options, with warnings for
the old usages:
--deploy became --with deploy
--revert became --with revert
--verify became --with verify
--no-deploy became --without deploy
--no-revert became --without revert
--no-verify became --without verify
The same change replaced the template-specification options with a single
--use option:
--deploy-template $path became --use deploy=$path
--revert-template $path became --use revert=$path
--verify-template $path became --use verify=$path
The corresponding config variables, add.deploy_template,
add.revert_template, and add.verify_template were replaced with a config
section, add.templates. No warnings were issued for the old names, though.
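A sketch of the newer template usage, with hypothetical paths and change names (the config key layout is assumed from the add.templates section name above):

```shell
# Old, deprecated form:
#   sqitch add widgets --deploy-template templates/deploy/custom.tmpl
# Current form, with --use:
sqitch add widgets --use deploy=templates/deploy/custom.tmpl

# The corresponding configuration now lives under add.templates:
sqitch config add.templates.deploy templates/deploy/custom.tmpl
```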
The set-* actions on the engine and target commands were replaced
three years ago (engine change, target change) with a single alter
action, with warnings, and able to be passed multiple times:
set-target became alter target
set-uri became alter uri
set-registry became alter registry
set-client became alter client
set-top-dir became alter top-dir
set-plan-file became alter plan-file
set-deploy-dir became alter deploy-dir
set-revert-dir became alter revert-dir
set-verify-dir became alter verify-dir
set-extension became alter extension
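As a sketch, a single alter action can now change several properties in one invocation (the target name and property values are hypothetical):

```shell
# Old, deprecated form required one action per property:
#   sqitch target set-registry assets prod
#   sqitch target set-client /opt/pg/bin/psql prod
# Current form: one action, multiple properties:
sqitch target alter prod --registry assets --client /opt/pg/bin/psql
```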
The data hashed to create change IDs was modified six years ago. At that
time, code was added to update old change IDs in Postgres databases; no
other engines were around at the time.
If removing any of these features would cause trouble for you or the organizations you know to be using Sqitch, please get in touch.
[Fascism’s] ideas are enacted first and foremost upon the bodies
and lives of the people whose presence within “our” national domain
is prohibitive. In Bannon/Trump’s case, that domain is nativist and
white. Presently, their ideas are inflicted upon people of color and
immigrants, who do not experience them as ideas but as violence. The
practice of fascism supersedes its ideas, which is why people affected
and diminished by it are not all that interested in a marketplace of
ideas in which fascists have prime purchasing power.
The error in Bannon’s headlining The New Yorker Festival would not
have been in giving him a platform to spew his hateful rhetoric, for
he was as likely to convert anyone as he himself was to be shown the
light in conversation with Remnick. The catastrophic error would’ve
been in allowing him to divorce his ideas from the fascist practices
in which they’re actualized with brutality.
The New Yorker Festival bills itself as “Conversations on culture and
politics,” but the important thing to understand about fascism — and its
cohorts nationalism and white supremacism — is that it’s not a conversation.
It’s not a set of ideas. No. Fascism is violence. One does not dialog with
fascism. Fascism is a violent terror to be stopped.
I released Sqitch v0.9998 this week. Despite the long list of changes,
only one new feature stands out: support for the Snowflake Data Warehouse
platform. A major work project aims to move all of our reporting data from
Postgres to Snowflake. I asked the team lead if they needed Sqitch support, and
they said something like, “Oh hell yes, that would save us months of
work!” Fortunately I had time to make it happen.
Snowflake’s SQL interface ably supports all the functionality required for
Sqitch; indeed, the implementation required fairly little customization. And
while I did report a number of issues and shortcomings to the Snowflake support
team, they always responded quickly and helpfully — sometimes revealing
undocumented workarounds to solve my problems. I requested that they be documented.
The work turned out well. If you use Snowflake, consider managing your databases
with Sqitch. Start with the tutorial to get a feel for it.
Of course, you might find it a little tricky to get started. In addition to a
long list of Perl dependencies, each database engine requires two external
resources: a command-line client and a driver library. For Snowflake, that means
the SnowSQL client and the ODBC driver. The PostgreSQL engine requires
psql and DBD::Pg compiled with libpq. MySQL calls for the mysql client
and DBD::mysql compiled with the MySQL connection library. And so on. You
likely don’t care what needs to be built and installed; you just want it to
work. Ideally install a binary and go.
I do, too. So I spent a month or so building Sqitch bundling support, to
easily install all its Perl dependencies into a single directory for
distribution as a single package. It took a while because, sadly, Perl provides
no straightforward method to build such a feature without also bundling unneeded
libraries. I plan to write up the technical details soon; for now, just know
that I made it work. If you Homebrew, you’ll reap the benefits in your next
brew install sqitch.
Pour One Out
In fact, the bundling feature enabled a complete rewrite of the Sqitch Homebrew
tap. Previously, Sqitch’s Homebrew formula installed the required modules in
Perl’s global include path. This pattern violated Homebrew best practices, which
prefer that all the dependencies for an app, aside from configuration, reside in
a single directory, or “cellar.”
The new formula follows this dictum, bundling Sqitch and its CPAN dependencies
into a nice, neat package. Moreover, it enables engine dependency selection at
build time. Gone are the separate sqitch_$engine formulas. Just pass the
requisite options when you build Sqitch:
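The invocation might look something like this (the option names are assumed from the tap's conventions; check `brew info sqitch` for the current set):

```shell
brew tap sqitchers/sqitch
brew install sqitch --with-postgres-support --with-sqlite-support
```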
In fact, the old sqitch_oracle formula hasn’t worked in quite some time, but
the new $HOMEBREW_ORACLE_HOME environment variable does the trick (provided
you disable SIP; see the instructions for details).
I recently became a Homebrew user myself, and felt it important to make Sqitch
build “the right way”. I expect this formula to be more reliable and better
maintained going forward.
Still, despite its utility, Homebrew Sqitch lives up to its name: It downloads
and builds Sqitch from source. To attract newbies with a quick and easy method
to get started, we need something even simpler.
Dock of the Bae
Which brings me to the installer that excites me most: The new Docker image.
Curious about Sqitch and want to download and go? Use Docker? Try this:
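Getting started is a matter of grabbing the wrapper script (the download path here is an assumption; see the sqitchers/docker-sqitch repository README for the canonical instructions):

```shell
# Fetch the wrapper script, make it executable, and run it:
curl -L -O https://raw.githubusercontent.com/sqitchers/docker-sqitch/master/sqitch
chmod +x sqitch
./sqitch help
```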
That’s it. On first run, the script pulls down the Docker image, which
includes full support for PostgreSQL, MySQL, Firebird, and SQLite, and weighs in
at just 164 MB (54 MB compressed). Thereafter, it works just as if Sqitch were
installed locally. It uses a few tricks to achieve this bit of magic:
It mounts the current directory, so it acts on the Sqitch project you
intend it to
It mounts your home directory, so it can read the usual configuration files
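Under the hood, the wrapper amounts to something like this docker run invocation (the image name and mount points are illustrative; the real script handles more details):

```shell
# Mount the current directory (the Sqitch project) and your home
# directory (for the usual configuration files), then run the image:
docker run -it --rm \
  --network host \
  -v "$(pwd):/repo" \
  -v "$HOME:/home" \
  sqitch/sqitch "$@"
```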
The script even syncs your username, full name, and host name, in case you
haven’t configured your name and email address with sqitch config. The only
outwardly obvious difference is the editor:1 If you add
a change and let the editor open, it launches nano rather than your preferred
editor. This limitation allows the image to remain as small as possible.
I invested quite a lot of effort into the Docker image, to make it as small as
possible while maximizing out-of-the-box database engine support — without
foreclosing support for proprietary databases. To that end, the repository
already contains Dockerfiles to support Oracle and Snowflake: simply
download the required binary files, build the image, and push it to your private
registry. Then set $SQITCH_IMAGE to the image name to transparently run it
with the magic shell script.
I plan to put more work into the Sqitch Docker repository over the next few
months. Exasol and Vertica Dockerfiles come next. Beyond that, I envision a
matrix of different images, one for each database engine, to minimize download
and runtime size for folx who need only one engine — especially for production
deployments. Adding Alpine-based images also tempts me; they’d be even
smaller, though unable to support most (all?) of the commercial database
engines. Still: tiny!
Container size obsession is a thing, right?
At work, we believe the future of app deployment and execution belongs to
containerization, particularly on Docker and Kubernetes. I presume that
conviction will grant me time to work on these improvements.
Well, that and connecting to a service on your
host machine is a little fussy. For example, to use Postgres on your local
host, you can’t connect to Unix sockets. The shell script
enables host networking, so on Linux, at least, you should be able to
connect to localhost to deploy your changes. On macOS and Windows, use
the host.docker.internal host name. ↩
The farce of consent as currently deployed is probably doing more harm as it
gives the misimpression of meaningful control that we are guiltily ceding
because we are too ignorant to do otherwise and are impatient for, or need,
the proffered service. There is a strong sense that consent is still
fundamental to respecting people’s privacy. In some cases, yes, consent is
essential. But what we have today is not really consent.
It still feels pretty clear-cut to me. I chose to check the box.
Think of it this way. If I ask you for your ZIP code, and you agree to give it
to me, what have you consented to?
I’ve agreed to let you use my ZIP code for some purpose, maybe marketing.
Maybe. But did you consent to share your ZIP code with me, or did you consent
to targeted marketing? I can combine your ZIP code with other information I
have about you to infer your name and precise address and phone number. Did
you consent to that? Would you? I may be able to build a financial profile of
you based on your community. Did you consent to be part of that? I can target
political ads at your neighbors based on what you tell me. Did you consent to that?
Well this is some essential reading for data privacy folx. Read the whole thing.
I myself have put too much emphasis on consent when thinking about and working
on privacy issues. The truth is we need to think much deeper about the context
in which consent is requested, the information we’re sharing, what it will be
used for, and — as Nissenbaum describes in this piece — the impacts on
individuals, society, and institutions. Adding her book to my reading list.
The Code as a whole is concerned with how fundamental ethical principles apply
to a computing professional’s conduct. The Code is not an algorithm for
solving ethical problems; rather it serves as a basis for ethical
decision-making. When thinking through a particular issue, a computing
professional may find that multiple principles should be taken into account,
and that different principles will have different relevance to the issue.
Questions related to these kinds of issues can best be answered by thoughtful
consideration of the fundamental ethical principles, understanding that the
public good is the paramount consideration. The entire computing profession
benefits when the ethical decision-making process is accountable to and
transparent to all stakeholders. Open discussions about ethical issues promote
this accountability and transparency.
The principles it promotes include “Avoid harm”, “Be honest and trustworthy”,
“Respect privacy”, and “Access computing and communication resources only when
authorized or when compelled by the public good”. The ACM clearly invested a lot
of time and thought into the updated code. Thinking about joining, just to
support the effort.
Mixing singular and plural is pretty common in most people’s speech and even writing:
“The analysis of all the results from five experiments support that claim.”
And one common expression mixing singular and plural even sounds a lot like
“They is” (and is often pronounced that way):
“There’s two kinds of people in this world.”
“There’s lots of reasons we shouldn’t go to that party.”
So maybe it won’t sound so weird after all:
“Sam volunteers at the homeless shelter. They’s someone I really admire.”
Some varieties of English already match plural “they” with a singular verb:
“they wasn’t satisfied unless I picked the cotton myself” (Kanye West line in New Slaves)
“They is treatin’ us good.” (Dave Chappelle Terrorists on the Plane routine)
“They wasn’t ready.” (Bri BGC17 commenting on Oxygen Bad Girls Club)
So why not singular “they” with a singular verb?
“They wasn’t going to the party alone.”
Using singular verbs when we’re using “they” to refer to one person might not
be so weird after all.
We have the same issue in some ways with singular “you.” Standard English
varieties tend to use a plural verb even with singular “you.” So “you are a
fine person,” not “you is a fine person.”
Except lots of varieties and lots of speakers do use “you is.”
I’ve been thinking about this piece a lot since I read it a couple weeks ago. It
wasn’t that long ago that I surrendered to using “they” as a gender-neutral
singular pronoun; today I’m embarrassed that my grammarian elitism kept me
resisting it for so long. No more of that! If anyone asks me to use singular verbs
with singular “they”, I’ll happily do it — and correct my habitual mouth when
it fails me. For they who wants to be referred to using singular verbs, I will
use singular verbs.
Intellectually, I find this whole idea fascinating, mostly because it never
occurred to me; it feels unnatural in my stupid mouth, but seems so obvious given
the examples. Some dialects have used this form since forever; the internal
logic is perfect, and only cultural elitism and inertia have repressed it. They
wasn’t satisfied indeed.
But logic is a flexible thing, given varying semantics. In an addendum to the
piece, Amy writes:
Edited: Chatting with my linguist friends Anne, Peter, and Jim gave me a new
way to talk about this topic. The form of the verb “are” (“They are”) might be
plural, but in the context of a singular “they” the verb would have singular
meaning, too. We do that with “you.” You are a good friend, Sue. The “are” is
singular just as the “you” is. So if we do start using “they” as the sole
singular pronoun, we wouldn’t have to change the form of the verb to make it
singular. It would already be heard as singular.
We are creative and flexible in using language. What a wonderful thing!
So just as “they” can be used as a singular pronoun, plural conjugations become
singular when used with singular “they” if we just say they are. Everybody’s
right! So make a habit of using the most appropriate forms for your audience.
For years, I’ve managed multiple versions of PostgreSQL by regularly editing and
running a simple script that builds each major version from source and
installs it in /usr/local. I would shut down the current version, remove the
symlink to /usr/local/pgsql, symlink the one I wanted, and start it up again.
This is a pain in the ass.
Recently I wiped my work computer (because reasons) and started reinstalling all
my usual tools. PostgreSQL, I decided, no longer needs to run as the postgres
user from /usr/local. What would be much nicer, when it came time to test
pgTAP against all supported versions of Postgres, would be to use a tool like
plenv or rbenv to do all the work for me.
So I wrote pgenv. To use it, clone it into ~/.pgenv (or wherever you want)
and add its bin directories to your $PATH environment variable:
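Setup might look like this (the repository URL is assumed; adjust the clone location to taste):

```shell
git clone https://github.com/theory/pgenv.git ~/.pgenv
export PATH="$HOME/.pgenv/bin:$HOME/.pgenv/pgsql/bin:$PATH"
```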
$ pgenv use 10.4
The files belonging to this database system will be owned by user "david".
This user must also own the server process.
# (initdb output elided)
waiting for server to start.... done
PostgreSQL 10.4 started
$ psql -U postgres
Type "help" for help.
Easy. Each version you install – as far back as 8.0 – has the default
superuser postgres for compatibility with the usual system-installed version. It
also builds all contrib modules, including PL/Perl using /usr/bin/perl.
With this little app in place, I quickly built all the versions I need. Check it out!
Other commands include start, stop, and restart, which act on the
currently active version; version, which shows the currently-active version
(also indicated by the asterisk in the output of the versions command);
clear, to clear the currently-active version (in case you’d rather fall back
on a system-installed version, for example); and remove, which will remove a
version. See the docs for details on all the commands.
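A hypothetical session tying those commands together (the version numbers are examples):

```shell
pgenv build 9.6.9      # download, compile, and install 9.6.9
pgenv versions         # list installed versions; "*" marks the active one
pgenv use 9.6.9        # point the pgsql symlink at 9.6.9 and start it
pgenv stop             # stop the currently active server
pgenv clear            # deactivate, falling back to any system install
pgenv remove 9.6.9     # delete an inactive version entirely
```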
How it Works
All this was written in an uncomplicated Bash script. I’ve only tested it on a
couple of Macs, so YMMV, but as long as you have Bash, curl, and /usr/bin/perl
on a system, it ought to just work.
It works by building each version in its own directory:
~/.pgenv/pgsql-10.4, ~/.pgenv/pgsql-11beta2, and so on. The currently-active
version is nothing more than a symlink, ~/.pgenv/pgsql, to the proper version
directory. There is no other configuration. pgenv downloads and builds versions
in the ~/.pgenv/src directory, and leaves the tarballs and compiled source in
place, in case they’re needed for development or testing. pgenv never uses them
again unless you remove a version and build it again, in which case
pgenv deletes the old build directory and unpacks from the tarball again.
Works for Me!
Over the last week, I hacked on pgenv to get all of these commands working. It
works very well for my needs. Still, I think it might be useful to add support
for a configuration file. It might allow one to change the name of the default
superuser, the location of Perl, and perhaps a method to change postgresql.conf
settings following an initdb. I don’t know when (or if) I’ll need that stuff.
Maybe you do, though? Pull requests welcome!
But even if you don’t, give it a whirl and let me know if you find any issues.