Just a Theory


Test Extensions With GitHub Actions

I first heard about GitHub Actions a couple years ago, but fully embraced them only in the last few weeks. Part of the challenge has been the paucity of simple but realistic examples, and quite a lot of complicated-looking JavaScript-based actions that seem like overkill. But through trial-and-error, I figured out enough to update my Postgres extensions projects to automatically test on multiple versions of Postgres, as well as to bundle and release them on PGXN. The first draft of that effort is pgxn/pgxn-tools 1, a Docker image with scripts to build and run any version of PostgreSQL between 8.4 and 12, install additional dependencies, build, test, bundle, and release an extension.

Here’s how I’ve put it to use in a GitHub workflow for semver, the Semantic Version data type:

name: CI
on: [push, pull_request]
jobs:
  test:
    strategy:
      matrix:
        pg: [12, 11, 10, 9.6, 9.5, 9.4, 9.3, 9.2]
    name: 🐘 PostgreSQL ${{ matrix.pg }}
    runs-on: ubuntu-latest
    container: pgxn/pgxn-tools
    steps:
      - run: pg-start ${{ matrix.pg }}
      - uses: actions/checkout@v2
      - run: pg-build-test

The important bits are in the jobs.test object. Under strategy.matrix, which defines the build matrix, the pg array lists each version to be tested. The job runs once for each version, which can be referenced via ${{ matrix.pg }} elsewhere in the job. The container key runs the job in a pgxn/pgxn-tools container, where the steps run. The steps are:

  • pg-start ${{ matrix.pg }}: Install and start the specified version of PostgreSQL
  • actions/checkout@v2: Clone the semver repository
  • pg-build-test: Build and test the extension

The intent here is to cover the vast majority of cases for testing Postgres extensions, where a project uses a PGXS Makefile. The pg-build-test script does just that.
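For context, a typical PGXS-based extension needs little more than a small Makefile that delegates to the build infrastructure shipped with PostgreSQL. This sketch is illustrative; the extension name, file paths, and version are hypothetical, not taken from semver:

```make
# Hypothetical minimal PGXS Makefile for an extension; names are illustrative.
EXTENSION    = widget                 # installs widget.control
DATA         = sql/widget--1.0.0.sql  # extension script to install
REGRESS      = base                   # runs test/sql/base.sql against test/expected/base.out
REGRESS_OPTS = --inputdir=test

# Defer to the PGXS build infrastructure shipped with PostgreSQL.
PG_CONFIG ?= pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
```

With a Makefile like that in place, make && make install && make installcheck, and therefore pg-build-test, should just work.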

A few notes on the scripts included in pgxn/pgxn-tools:

  • pg-start installs, initializes, and starts the specified version of Postgres. If you need other dependencies, simply list their Debian package names after the Postgres version.

  • pgxn is a client for PGXN itself. You can use it to install other dependencies required to test your extension.

  • pg-build-test simply builds, installs, and tests a PostgreSQL extension or other code in the current directory. Effectively the equivalent of make && make install && make installcheck.

  • pgxn-bundle validates the PGXN META.json file, reads the distribution name and version, and bundles up the project into a zip file for release to PGXN.

  • pgxn-release uploads a release zip file to PGXN.

In short, use the first three utilities to handle dependencies and test your extension, and the last two to release it on PGXN. Simply set GitHub secrets with your PGXN credentials, pass them in environment variables named PGXN_USERNAME and PGXN_PASSWORD, and the script will handle the rest. Here’s how a release job might look:

  release:
    name: Release on PGXN
    # Release on push to main when the test job succeeds.
    needs: test
    if: github.ref == 'refs/heads/main' && github.event_name == 'push' && needs.test.result == 'success'
    runs-on: ubuntu-latest
    container:
      image: pgxn/pgxn-tools
      env:
        PGXN_USERNAME: ${{ secrets.PGXN_USERNAME }}
        PGXN_PASSWORD: ${{ secrets.PGXN_PASSWORD }}
    steps:
      - name: Check out the repo
        uses: actions/checkout@v2
      - name: Bundle the Release
        run: pgxn-bundle
      - name: Release on PGXN
        run: pgxn-release

Note that the needs and if keys require that the test job defined above pass, and ensure the job runs only on a push event to the main branch, where we push final releases. We set PGXN_USERNAME and PGXN_PASSWORD from the secrets of the same name, and then the steps check out the project, bundle it into a zip file, and release it on PGXN.

There are a few more features of the image, so read the docs for the details. As a first cut at PGXN CI/CD tools, I think it’s fairly robust. Still, as I gain experience and build and release more extensions in the coming year, I expect to work out integration with publishing GitHub releases, and perhaps build and publish relevant actions on the GitHub Marketplace.


  1. Not a great name, I know; it will probably change as I learn more. ↩︎

Valerie Wheeler


CSUS Anthropology department picnic, 1991

Two years ago, my undergraduate Anthropology advisor, Professor Valerie Wheeler, died after a sudden and brief battle with leukemia. She played a vital role in my academic and personal life, and her passing affected me deeply. Here’s the note I left on her Legacy page.

I’m deeply saddened by Valerie’s passing. She was more than just my undergraduate advisor in the late 80s and early 90s, but also “Mom” to me and a slew of my fellow anthropology majors. Such a great, inspiring teacher, who taught me to view the world through the lens of culture, so that I saw it in a completely new way. It was eye-opening, and had such an impact that I could never close them again. And for that, I wanted to do well, to make her proud. She expected a lot, and I wanted to meet those expectations. It made me a better person, a more thoughtful person. And I could not appreciate it more. I’m sorry not to have kept in better touch, and so sad that the world has lost such a compassionate, wonderful person. I will carry some of that with me for the remainder of my life. Valerie’s impact was great, and I’m grateful to have had her in my life. My condolences to her family.

Antigone’s Voice

First of all, No

Photo by Jazmin Quaynor on Unsplash

A couple months back, I saw Antigone in Ferguson at St. Ann’s Church in Brooklyn. The project pairs dramatic readings of Sophocles’ Antigone with a moving choral arrangement performed by a diverse cast of activists, students, police officers, and Ferguson & NYC community members. I don’t tend to go for gospel, but these stunning voices shot through me like a revelation. The powerful, vulnerable expression of the human voice — a profound manifestation of the human capacity for creativity and beauty — broke my heart and raised my optimism for humanity. I got a lot out of the discussion of racial injustice following the performance, too. Don’t miss it if you get the chance.

The relationship between Antigone and King Creon struck me in a new way, no doubt because recent encounters with autocratic and sexist behaviors have been front-of-mind lately.

The play depicts Creon as the relatively thoughtful king of Thebes and doting uncle to Antigone and her sister, Ismene. He forbade the burying of Polynices while still in the heat of the just-ended civil war, and, despite his advisors’ best arguments, stubbornly rejects revoking the law. His inability to admit his mistake, despite the clarity of its recklessness — even to his own mind — exemplifies classic authoritarian behavior: never admit error. Naturally it leads to his downfall.

While Creon’s advisors appeal for revocation of the law, his niece, Antigone, refuses to submit to it, and declines to compromise her integrity or the tone of her voice when speaking against it. She deliberately flouts the law by burying her brother, makes no attempt to deny it, and angrily excoriates Creon at her very public trial. His intransigence will be his downfall, she proclaims. The girl doesn’t sugar-coat it, and in her passion, her voice rings out loud and true to all assembled.

But, to Creon’s ear, shrilly.

Stung by Antigone’s passionate defiance, Creon finds her tone reason enough to ignore the substance of her argument, to dismiss the risks she highlights. The entire community rallies to her cry, but Creon, blinded by his bruised ego, commits to his folly and sentences Antigone to death. It is his undoing, and a tragedy for Thebes.

One can learn a lot from Creon.

How often do we discount a woman’s message because of her tone? When passion speaks truth to power, what reasons do we find to dismiss it? When someone cares, but poor leadership prevents understanding and growth, does the anger, the passion, the righteousness wound the ego or motivate action?

Passion is a virtue, and tone a reflection of commitment. People who care about their world — their work, their environment, their society — will be angry when it fails them. If it stings when they tell you that you’ve made mistakes, that you’re failing them and other people, put ego aside, recognize that your implicit biases might seek reason to dismiss them, and instead, simply listen.

My fellow white dudes, don’t be like Creon. Recognize your fallibility, head off your urge to disregard feedback in the name of your discomfort, and beware the ego reflexively assigning labels such as “whiner”, or “negative nelly”, or even “not focused on problem-solving”. This, too, tells you something. Because the people who don’t care say nothing. Those with the passion to speak and the willingness to do so expect more of you. They may be disappointed in you or your actions, but want you to take the opportunity to be better, to admit and rectify your mistakes, and to set things on a better path.

So maybe let’s take a stab at meeting their expectations.

Sqitch v1.0.0

Sqitch Logo

After seven years of development and hundreds of production database deployments, I finally decided it was time to release Sqitch v1.0.0, and today’s the day. I took the opportunity to resolve all known bugs from previous releases, so there’s no new functionality since v0.9999. Still, given the attention typically paid to a significant milestone release like 1.0.0, my employer published a blog post describing a bit of the history and philosophy behind Sqitch.

The new site goes into great detail describing how to install Sqitch, but the important links are:

  • CPAN — Install Sqitch from CPAN
  • Docker — Run Sqitch from a Docker container
  • Homebrew — Homebrew Sqitch on macOS
  • GitHub — Sqitch releases on GitHub

Thanks to everyone who helped get Sqitch to this point; I appreciate it tremendously. I’m especially grateful to:

Thanks a million for all your help and support!

Impeach

Dan Pfeiffer has a plan to win the impeachment fight:

Third, an impeachment inquiry should be plotted out more like a TV show than a trial. The star witnesses and high-profile hearings should be spaced out and timed for maximum impact. They should tell a story about Trump’s misdeeds. There should be no rush to get this over with quickly or to meet some artificial timeline. The audience for this show is not the Senate. It’s not Twitter and it’s not the panel on Morning Joe. The audience is the American people — specifically the new and sporadic Democratic voters who came out in 2018, or the independents and Republicans who say they’re most concerned about Trump’s conduct. Our job is to persuade them, not the DC pundit class.

Smart strategy to “prosecute a devastating case against Trump that increases the likelihood that Democrats win the White House, expand our House Majority, and take the Senate.” Each day brings us closer to impeachment proceedings, regardless of what Democratic leadership might want. The over-the-top malfeasance and criminality of this president and his White House leads inexorably to impeachment proceedings. It’s past time for the Democrats to accept that fact and make a plan to maximize its effectiveness.

Ban the Nazis, Twitter

Joseph Cox and Jason Koebler, in a Motherboard article provoked by a report that a Twitter employee explained that an automated banning of white supremacy would also ban Republican politicians:

Twitter has not publicly explained why it has been able to so successfully eradicate ISIS while it continues to struggle with white nationalism. As a company, Twitter won’t say that it can’t treat white supremacy in the same way as it treated ISIS. But external experts Motherboard spoke to said that the measures taken against ISIS were so extreme that, if applied to white supremacy, there would certainly be backlash, because algorithms would obviously flag content that has been tweeted by prominent Republicans—or, at the very least, their supporters. So it’s no surprise, then, that employees at the company have realized that as well.

Here’s an idea: ban white supremacists. Then when white supremacists complain, no matter what their office or political affiliation, cite the tweets at issue and explain how they violate the rules and qualify as hate speech. If they scream “free speech!”, invite them to find another platform on which to express their hate.

In other words, do the right thing, and have a fucking backbone.

Time is Short, So Be Generous

Supermassive Black Hole

Image by ESO/R.Genzel and S.Gillessen

This video, Timelapse of the Future, has kept me thinking ever since Kottke posted it a few weeks ago. Given current knowledge, the expectation is that the universe will go on forever, but thanks to entropy and expansion, it will eventually be full of, well, nothing at all. This rather limits the time hospitable to life. This arresting quotation from Brian Cox starting at the 12:55 mark captures it:

As a fraction of the lifespan of the universe, as measured from its beginning to the evaporation of the last black hole, life, as we know it, is only possible for one thousandth of a billion billion billionth, billion billion billionth, billion billion billionth of a percent.

That’s:

.000000000000000000000000000000000000000000000000000000000000000000000000000000000001%
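For the skeptical, the arithmetic checks out: one thousandth of a billion billion billionth, taken three times over, is a millionth of a billion billion billion billion billion billion billion billionth of a percent. A quick check:

```python
import math

# One thousandth of (a billion billion billionth) three times over,
# expressed as a percent: 10**-3 * (10**-27)**3 = 10**-84.
fraction_of_percent = 10**-3 * (10**-27)**3
assert math.isclose(fraction_of_percent, 1e-84)
```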

Boy howdy our time is limited. We should make the best of it, to let our brief time be as pleasant, happy, and fulfilling as possible. All of us. Be kind, empathetic, compassionate, and generous with your fellow human beings. In the end, only how well we treat each other matters.

(Via kottke.org)

Humane Agile

This Jan Wischweh piece surveying the recent literature on the so-called “agile crisis” is a bit of a slog, but these bits caught my attention:

One striking symptom of the Agile Crisis is the impositions of Agile on teams, which seems to be a common practice today. If Agile is so great and really gives more power and autonomy to the developers, why is it commonly imposed by upper management?

And:

Trust is the basis for any good communication. But Trust cannot be demanded. It needs to be earned. This Problem is highly related to Agile as trust is essential for any Agile team. But it can never be imposed.

And the issue of trust cannot be addressed without looking at the problem of power. Agile, especially Scrum, is more about efficiency than about empowering developers and it is not a shift away from Taylorism. On closer inspection, this will be visible in every single conflict within companies trying to transform towards Agile. Quite the opposite is true: it makes people more replaceable and controllable and is a modern and competitive form of Management.

Indeed, management’s focus on process and reproducibility (as in Taylorism) often drives the adoption of agile development processes. But truly autonomous agile teams must be empowered to make their own decisions. That means inviting them to adopt agile practices, rather than imposing those practices on them, and it means trusting teams to make decisions.

In other words, unilaterally determining team composition, deciding that they’ll do “agile” or “scrum” or “kanban”, and reserving the power to override their decisions perpetuates a traditional focus on repetitive tasks and control, rather than autonomy and craft. It demonstrates a lack of trust in the team, and without that trust, the team won’t trust management, either — an untenable, potentially catastrophic situation. No wonder “Agile” fails so often that we now have an “agile crisis”.

I keep coming back to the fundamental idea that teams are made out of people, and management should always support, promote, and empower the people in the company with the autonomy to excel and to do their best work. People over process.

Compassionate Sacking

Jennifer Kim, in a Medium post based on her Twitter thread:

#1 rule: No one should ever be surprised with a “you’re fired.” That’s how you create disgruntled employees, embarrassing Glassdoor reviews, dip in team morale, etc. An out-of-the-blue firing is a failing on the manager’s part, not the employees.

So how do you do that? The most important bit:

  1. Give them a fair shot to improve. As a leader, it’s your job to try to make it work, each employee is owed that.

Practice listening skills. Demonstrate that you believe in them, and you want to see them improve. Commit to giving a LOT more feedback (specific & documented).

If you have little faith that the employee will be able to improve, taking these and the other steps Jennifer recommends might feel like a waste of time. But unless the employee’s actions involve violence, harassment, fraud, etc., you need to give them every chance possible for not only their benefit, but the benefit of their coworkers. Of course you don’t mention it to your other employees, but people talk, they know what’s going on, and they all need to know that if they step out of line, you’ll support them as much as you can.

In other words, a firing should never come as a surprise, either to the employee getting the sack or to their coworkers. Because worse than negative Glassdoor reviews is the erosion of trust among the people you continue to work with after the event.

Founding Fodder

We the People

Photo by Anthony Garand on Unsplash

Tone is set from the top, they say. I once started a company and ran it for 10 years, but I rarely thought about leadership, let alone setting the tone. It mattered little, since I was the sole employee for most of that time. Now I ponder these topics a lot, as I watch leaders fail to consciously create and promulgate an ethical organizational culture. I don’t mean they’re unethical, though some might be. I mean they, like me, never gave it much thought, or recognized its importance.

This myopia degrades long-term prospects, leaving employees and partners to invent and impose their own interpretations of the organization’s nature, motives, and goals. Without a clearly-defined direction, people make their own way, and despite the best intentions, those ideas surely don’t quite align with the underpinning ideas and expectations of leadership.

Constituted Outline

Next time I find myself in the position to shape an organization — found a company, create a new group, organize a team — I will give careful thought to these issues, and formalize them in foundational documents that provide focus and direction for the duration. A sort of Organizational Constitution. And like any constitution, its articles will both set the tone and encode the rules.

Preamble: Culture

Culture establishes an environment in which members of the organization feel cared about, respected, valued, and physically and psychologically safe; where they understand what they’re a part of and feel fulfilled by their roles. Culture recognizes what people can contribute, and finds ways to let them do so. It lets them know there’s a place for them, and that they actively contribute to the Mission.

Culture cannot be legislated, of course, but a preamble announces intentions, sets the tone, and establishes the foundation on which the rest follows.

Article 1. Values

A clear articulation of the organization’s Values — its principles and beliefs. These comprise both internal-facing expectations for members as well as global values and beliefs defining the organization’s place and role in the world. Ideally, they’re the same. Such values inform expectations for partners, investors, customers, and users. Leadership must demonstrate these values in their everyday work, and always be mindful of them when making decisions. Examples of values meaningful to me include:

  • Humaneness
  • Empathy
  • Privacy
  • Security
  • Diversity & Inclusivity
  • Respect
  • Empowering the disempowered
  • Making the world a better place
  • Advancing social justice
  • Doing the right thing
  • Making people happy/productive/empowered/independent/delighted
  • Innovation
  • Integrity
  • Quality
  • Teamwork
  • Accountability
  • Responsibility
  • Passion
  • Sustainability
  • Community
  • Courage
  • Focus
  • Excellence
  • Collaboration

Article 2. Vision

The Vision lays out how the organization wants to make its dent in the universe. It focuses on the future, and what the organization ultimately seeks to become. It should align closely with the Values, bringing them to bear to define the organization’s purpose, and describe the long-term, measurable goal. The Vision answers questions such as:

  • What are our hopes and dreams?
  • What problem are we solving for the greater good?
  • Who and what are we inspiring to change?

Article 3. Mission

The Mission focuses on the now, and defines how the organization goes about achieving its Vision. It must never contradict the Vision or Values; indeed, they shape the Mission. It’s the core of the business, and from the Mission come Strategy and Execution. A mission statement embodies the Mission by answering questions such as:

  • What do we do?
  • Whom do we serve?
  • How do we serve them?

Article 4. Brand

Closely aligned with Values, the Brand defines the organization. The brand commits to the Values, Vision, and Mission, recognized both internally and externally, so that anyone can say what the organization stands for and how it goes about achieving its goals. Decisions that might erode the Brand or violate its underpinning Values must be avoided.

Article 5. Strategy

The Mission is the “what”; the Strategy is the “how”. The Strategy describes how the organization intends to execute on its Mission to achieve its Vision. It should be high-level but practical, goal-focused but not methodologically imperative. It defines objectives that clearly demonstrate value for existing and prospective constituents (customers, users, etc.) while adhering to — and never corroding — the organization’s Values and Vision.

Article 6. Execution

Everyone in the organization should be aware of what the Strategy is, what its objectives are, and how it furthers the Mission while adhering to its Values. Recognition and continual reinforcement of the Strategy and its objectives create focus, providing a guide for decision-making. Ultimately, Execution means delivery. It requires meaningful goals to fulfill the Strategy and the achievement of its objectives: shipping product, meeting deadlines, effectively promoting products and solutions, and acquiring happy constituents who enjoy the fruits of the organization’s work and derive benefit and value from it.

Article 7. Structure

The organization’s Structure must enable it to execute the Strategy effectively. That means cohesive teams with clear mandates and the focus and autonomy to execute. Strong coupling of deliverables across teams ought to be minimized, but expert consultation should be provided where needed. Everyone in the organization should be aware of the Structure, and understand their roles and the roles of other teams.

Article 8. Communication

Leadership must be aware of all of the above tenets and invoke them regularly. Speak every day about what the organization believes in (Values), what it wants to see in the world (Vision, Mission), and how it contributes to making that world (Strategy, Execution). Communicate consistently and constantly within the context of the products made and services provided — toward the output of the Strategy, the organization’s deliverables. Demonstration of the alignment of the Strategy with the Values of the organization must be continuous, and always consulted when making decisions.

This Communication must be verbal, but also written. Guiding documents must outline all of these aspects and tie all the pieces together. So in addition to the constitutional articles that define the Values, Vision, and Mission, there must be living documents that articulate the Strategy for achieving the Vision and Mission. These include road maps, planning documents, specifications, promotional plans and materials, organizational structure, team and role definitions, etc.

Pursuit of Happiness

Inconsistency in these articles abounds in the business world, partly because companies seldom convene a constitutional convention to create them — and sometimes because internal- and external-facing messaging varies. It need not be the case.

Perhaps working through these topics with a team will help constitute the grounds on which the organization functions and presents itself to its members and the world. Even if some members disagree with or are indifferent to some of its tenets, all will appreciate the clarity and focus they engender. And an organization with purpose gives its members purpose, meaning to their work, and satisfaction in the furthering of the mission.

Borderline

In just about any discussion of GDPR compliance, two proposals always come up: disk encryption and network perimeter protection. I recently criticized the focus on disk encryption, particularly its inability to protect sensitive data from live system exploits. Next I wanted to highlight the deficiencies of perimeter protection, but in doing a little research, I found that Goran Begic has already made the case:

Many organizations, especially older or legacy enterprises, are struggling to adapt systems, behaviors, and security protocols to this new-ish and ever evolving network model. Outdated beliefs about the true nature of the network and the source of threats put many organizations, their information assets, and their customers, partners, and stakeholders at risk.

What used to be carefully monitored, limited communication channels have expanded into an ever changing system of devices and applications. These assets are necessary for your organization to do business—they are what allow you to communicate, exchange data, and make business decisions and are the vehicle with which your organization runs the business and delivers value to its clients.

Cloud computing and storage, remote workers, and the emerging preference for micro-services over monoliths1 vastly complicate network designs and erode boundaries. Uber-services such as Kubernetes recover some control by wrapping all those micro-services in the warm embrace of a monolithic orchestration layer, but by no means restore the simplicity of earlier times. Once the business requires the distribution of data and services to multiple data centers or geographies, the complexity claws its way back. Host your data and services in the cloud and you’ll find the boundary all but gone. Where’s the data? It’s everywhere.

In such an environment, staying on top of all the vulnerabilities — all the patches, all the services listening on this network or that, inside some firewall or out, accessed by whom and via what means — becomes exponentially more difficult. Even the most dedicated, careful, and meticulous of teams sooner or later overlook something. An unpatched vulnerability. An authentication bug in an internal service. A rogue cloud storage container to which an employee uploads unencrypted data. Any and all could happen. They do happen. Strive for the best; expect the worst.

Because it’s not a matter of whether or not your data will be breached. It’s simply a matter of when.

Unfortunately, compliance discussions often end with these two mitigations, disk encryption and network perimeter protection. You should absolutely adopt them, and a discussion rightfully starts with them. But then it’s not over. No, these two basics of data protection are but the first step to protect sensitive data and to satisfy the responsibility for security of processing (GDPR Article 32). Because sooner or later, no matter how comprehensive the data storage encryption and firewalling, eventually there will be a breach. And then what?

“What next” bears thinking about: How do you further reduce risk in the inevitable event of a breach? I suggest taking the provisions of the GDPR at face value, and consider three things:

  1. Privacy by design and default
  2. Anonymization and aggregation
  3. Pseudonymization

Formally, items 2 and 3 fall under item 1, but I would summarize them as:

  1. Collect only the minimum data needed for the job at hand
  2. Anonymize and aggregate sensitive data to minimize its sensitivity
  3. Pseudonymize remaining sensitive data to eliminate its breach value
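To make the third item concrete, here is a minimal sketch of one common pseudonymization approach: a keyed hash that replaces an identifier with a stable token. The function and key here are illustrative; a real deployment would fetch the key from a secrets manager, stored well away from the data it protects.

```python
import hashlib
import hmac

# Illustrative only: the key must live outside the data store (e.g., in a
# secrets manager), or the pseudonymization buys you nothing in a breach.
SECRET_KEY = b"example-key-fetch-from-a-vault-in-real-life"

def pseudonymize(value: str) -> str:
    """Replace a sensitive value with a stable, non-reversible token.

    The same input always yields the same token, so records can still be
    joined and aggregated, but the data alone reveals nothing sensitive.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Stable for the same input, distinct for different inputs.
assert pseudonymize("alice@example.com") == pseudonymize("alice@example.com")
assert pseudonymize("alice@example.com") != pseudonymize("bob@example.com")
```

Store the tokens in place of the raw identifiers, and an attacker who exfiltrates the database gets values with no breach value at all.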

Put these three together, and the risk of sensitive data loss and the costs of mitigation decline dramatically. In short, take security seriously, yes, but also take privacy seriously.


  1. It’s okay, as a former archaeologist I’m allowed to let the metaphor stand on its own. ↩︎

Criminals, Not Spies

Masha Gessen, in a piece for The New Yorker:

We cringed at the characterization of the Russian online influence campaign as “sophisticated” and “vast”: Russian reporting on the matter—the best available — convincingly portrayed the troll operation as small-time and ridiculous. It was, it seems, fraudulent in every way imaginable: it perpetrated fraud on American social networks, creating fake accounts and events and spreading falsehoods, but it was also fraudulent in its relationship to whoever was funding it, because surely crudely designed pictures depicting Hillary Clinton as Satan could not deliver anyone’s money’s worth.

I think this is exactly right. So much of the coverage depicts the Russian hacking as “vast” and “sophisticated”. As a technologist working in information security, I find this framing irresponsible and naïve at best — complicit at worst. (Sadly, even the former director of the CIA uses this framing.) The techniques are those used for fraud, extortion, blackmail, and the like. They effectively advance a criminal conspiracy because they’re simple; they exploit human vulnerabilities. Far from clandestine government surveillance or espionage, the point is disinformation for the benefit of a very few. Painting it as “massive” or “advanced” only increases its effectiveness.

That’s just one aspect of the problematic coverage. Gessen also brings a sociological perspective to bear: The Russian government and its cohort more closely approximates a “Mafia state” than a dictatorship. A press that understands the difference will cover these people not as heads of state, but as criminals who happen to control states. I hope some, at least, take it to heart.

(Via Lauren Bacon)

Flex Your BICEPS

I’ve been thinking a lot about what creative professionals want and expect out of their jobs. We require certain base features of a job, the absolute minimum for even considering employment:

  • Fair, livable compensation for the work
  • Comprehensive, low maintenance, effective benefits (especially health care)
  • Equitable work hours and conditions (vacation time, work/life balance)
  • Safe work environment

Employers attempting to skimp on any of these items devalue the people they employ and the work they do. Don’t do that.

Assuming an organization meets these fundamentals, what else gets people excited to go to work? What makes employees happy, committed, and productive members of the team? Fortunately, I’m far from the first to explore this topic. Paloma Medina reduces the literature to the muscular acronym BICEPS:

There are six core needs researchers find are most important for humans at work. Not all are equally important to everyone. You might find that equity and belonging are most important to you, but choice and status are most important to your employee. Getting to know them and coaching to them is a shortcut to making others feel understood and valued (aka inclusivity).

The BICEPS core needs:

  1. Belonging
  2. Improvement/Progress
  3. Choice
  4. Equality/Fairness
  5. Predictability
  6. Significance

Beyond the utility of having these needs enumerated to think about collectively — with obvious implications — I find it useful to examine them from varying frames of reference. To that end, consider each not from the perspective of rewards and perks, certificates and foosball tables. Ponder them with the goal of creating a virtuous cycle, where the work improves the company, engendering greater satisfaction in the work and encouraging more of the same.

Belonging

Organizations serious about encouraging friendships and closeness often highlight social gatherings, team-building exercises, and outings. But don’t underestimate the motivation of the work. Small teams given the space to collaborate and accomplish their goals might be the best structure to create a sense of belonging to a tight-knit group — and for employees to find joy in their accomplishments.

Then reward those accomplishments. Not just with compensation or perks. No. Put the full force of the business behind them. If a team finished work on a feature or shipped a product, don’t limit recognition to a cocktail hour and a raised toast. Promote the hell out of it through all available channels: marketing, sales, blogging, support, community forums, whatever. The surest road to satisfaction and a sense of belonging is to turn that work into a palpable success for the organization.

Improvement/Progress

Funds for conferences, training, and formal education clearly help employees make progress in their careers, or to simply improve themselves. But people also get satisfaction from work that helps the company to execute its strategies and meet its goals. Assuming the vision aligns with an employee’s values,1 contributing to the material achievement of that vision becomes the employee’s achievement, too.

So be sure to create opportunities for all employees to grow, both in their careers and in their contributions to the company mission. Avoid artificial divides between those who execute the mission and those who support them. Not everyone will participate; still, encourage ideas and suggestions from all quarters and, where possible, adopt them. Beyond the old canard to “act like an owner”, clearly link organizational success to the ideas and work that created it, and give everyone the chance to make a difference. They improve as the business improves, and that’s progress.

Choice

Typically, “choice” means different healthcare plans, Mac or PC, sitting or standing desk. Such perks are nice, but not materially meaningful.2 The choices that warm the creative worker’s heart have much more to do with autonomy and decision-making than fringe benefits. Let teams choose their projects, decide on technologies, self-organize, make the plans to execute. People empowered to take initiative and make decisions without micromanagement or post-hoc undermining find motivation and reward in the work itself. Let them do it!

Equality/Fairness

Yes, grant employees equal access to resources, to management, to the decision-making process, and any other information necessary for their work, benefits, etc. That only stands to reason. But give them equal access to interesting work, too. Where possible, avoid unilaterally appointing people to teams or projects: let them organically organize and pick their collaborators and projects. Such decisions mustn’t be made in isolation; it wouldn’t be fair. Rather, you’ll need to hold regular get-togethers of all relevant teams to make such decisions collectively, PI Planning-style. Give everyone a voice, leave no one out, and they will mostly work out the optimal distribution of tasks.

Predictability

In addition to paying employees on time, every two weeks, make the work cycle predictable, too. Everyone should have a good idea when things happen, what the iteration cycle looks like, what the steps are and when they get slotted into the schedule, when projects complete and products ship. Just as importantly, make it clear what they will be working on next – or at least what’s in the pipeline for the teams to choose and plan for in the next iteration of the development process. A predictable cadence for the work lets people understand where they are at any given time, what’s next, and what successful execution looks like.

Significance

Titles and industry recognition, obviously, but this item brings my commentary full circle. Make sure that the work employees do gets seen not only by immediate managers, not simply lauded at the weekly dessert social. Make it a part of the success of the company. Promote the hell out of it, let customers and users know that it exists and solves their problems — no, show them — and shout it from the rooftops so the entire world knows about all the stuff made by your super valuable team of humans.

They’ll be happier, more satisfied, and ready to make the next success.


  1. A very big assumption indeed. I expect to write a bit about company strategies and alignment to employee values soon. ↩︎

  2. Okay, sometimes a choice is no choice at all. Mac or nothing for me! ↩︎

The Problem With Disk Encryption

Full disk encryption provides incredible data protection for personal devices. If you haven’t enabled FileVault on your Mac, Windows Device Encryption on your PC, or Android Device Encryption on your phone, please go do it now (iOS encrypts storage by default). It’s easy, efficient, and secure. You will likely never notice the difference in usage or performance. Seriously. This is a no-brainer.

Once enabled, device encryption prevents just about anyone from accessing device data. Unless a malefactor possesses both device and authentication credentials, the data is secure.

Mostly.

Periodically, vulnerabilities arise that allow circumvention of device encryption, usually by exploiting a bug in a background service. OS vendors tend to fix such issues quickly, so keep your system up to date. And if you work in IT, enable full disk encryption on all of your users’ devices and drives. Doing so greatly reduces the risk of sensitive data exposure via lost or stolen personal devices.

Servers, however, are another matter.

The point of disk encryption is to prevent data compromise by entities with physical access to a device. If a governmental or criminal organization takes possession of encrypted storage devices, gaining access to any of the data presents an immense challenge. Their best bet is to power up the devices and scan their ports for potentially-vulnerable services to exploit. The OS allows such services transparent access to the file system via automatic decryption. Exploiting such a service allows access to any data the service can access.

But, law enforcement investigations aside,1 who bothers with physical possession? Organizations increasingly rely on cloud providers with data distributed across multiple servers, perhaps hundreds or thousands, rendering the idea of physical confiscation nearly meaningless. Besides, when exfiltration typically relies on service vulnerabilities, why bother taking possession of hardware at all? Just exploit vulnerabilities remotely and leave the hardware alone.

Which brings me to the issue of compliance. I often hear IT professionals assert that simply encrypting all data at rest2 satisfies the responsibility to ensure the security of processing (GDPR Article 32). This interpretation may be legally correct3 and relatively straightforward to achieve: simply enable disk encryption, protect the keys via an appropriate and closely-monitored key management system, and migrate data to the encrypted file systems.4
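To make the “encrypted at rest, transparent with the key” property concrete, here’s a minimal file-level sketch using OpenSSL. It’s an analogy for block-level disk encryption, not a substitute for it: the filenames and passphrase are illustrative, and a real deployment keeps keys in a key management system, never on the command line.

```shell
# Write a record, encrypt it, and discard the plaintext; only the
# ciphertext remains "at rest" on disk.
echo 'sensitive record' > data.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -in data.txt -out data.enc -pass pass:example
rm data.txt

# Without the key, data.enc is opaque. With it, decryption is
# transparent -- exactly the access a compromised service inherits.
openssl enc -d -aes-256-cbc -pbkdf2 -in data.enc -pass pass:example
```

The last command recovers the plaintext, which is precisely the problem: any process holding the key — or exploiting a service that holds it — sees decrypted data.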

This level of protection against physical access is absolutely necessary for protecting sensitive data.

Necessary, but not sufficient.

When was the last time a breach stemmed from physical access to a server? Sure, some reports in the list of data breaches identify “lost/stolen media” as the breach method. But we’re talking lost (and unencrypted) laptops and drives. Hacks (service vulnerability exploits), accidental publishing,5 and “poor security” account for the vast majority of breaches. Encryption of server data at rest addresses none of these issues.

By all means, encrypt data at rest, and for the love of Pete please keep your systems and services up-to-date with the latest patches. Taking these steps, along with full network encryption, is essential for protecting sensitive data. But don’t assume that such steps adequately protect sensitive data, or that doing so will achieve compliance with GDPR Article 32.

Don’t simply encrypt your disks or databases, declare victory, and go home.

Bear in mind that data protection comes in layers, and those layers correspond to levels of exploitable vulnerability. Simply addressing the lowest-level requirements at the data layer does nothing to prevent exposure at higher levels. Start with disk encryption, but then think through how best to protect data at the application layer, the API layer, and, yes, the human layer, too.


  1. Presumably, a legitimate law enforcement investigation will compel a target to provide the necessary credentials to allow access by legal means, such as a court order, without needing to exploit the system. Such an investigation might confiscate systems to prevent a suspect from deleting or modifying data until such access can be compelled — or, if such access is impossible (e.g., the suspect is unknown, deceased, or incapacitated) — until the data can be forensically extracted. ↩︎

  2. Yes, and in transit. ↩︎

  3. Although currently no precedent-setting case law exists. Falling back on PCI standards may drive this interpretation. ↩︎

  4. Or databases. The fundamentals are the same: encrypted data at rest with transparent access provided to services. ↩︎

  5. I plan to write about accidental exposure of data in a future post. ↩︎

Testing Perl Projects on Travis Windows

A few months ago, Travis CI announced early access for a Windows build environment. In the last couple weeks, I spent some time figuring out how to test Perl projects there by installing Strawberry Perl from Chocolatey.

The result is the sample project winperl-travis. It demonstrates three .travis.yml configurations to test Perl projects on Windows:

  1. Use Windows instead of Linux to test multiple versions of Perl. This is the simplest configuration, but useful only for projects that never expect to run on a Unix-style OS.
  2. Add a Windows build stage that runs the tests against the latest version of Strawberry Perl. This pattern is ideal for projects that already test against multiple versions of Perl on Linux, and just want to make sure things work on Windows.
  3. Add a build stage that tests against multiple versions of Strawberry Perl in separate jobs.

See the results of each of the three approaches in the CI build. A peek:

winperl-travis CI build results

The “Test” stage, the Travis CI default, runs tests on two versions of Perl on Windows. The “Windows” stage tests on a single version of Windows Perl, independent of the “Test” stage. And the “Strawberry” stage tests on multiple versions of Windows Perl, also independent of the “Test” stage.
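For the curious, the “Strawberry” stage boils down to one job per Strawberry Perl release, each pinning a specific Chocolatey package version. A sketch of that pattern — the package versions here are illustrative, so check the winperl-travis repository for the real configuration:

```yaml
jobs:
  include:
    # One job per pinned Strawberry Perl release; Travis groups
    # consecutive jobs with the same stage name into a single stage.
    - stage: Strawberry
      os: windows
      language: shell
      before_install:
        - cinst -y strawberryperl --version 5.28.0.1
        - export "PATH=/c/Strawberry/perl/site/bin:/c/Strawberry/perl/bin:/c/Strawberry/c/bin:$PATH"
      install: cpanm --notest --installdeps .
      script: cpanm -v --test-only .
    - stage: Strawberry
      os: windows
      language: shell
      before_install:
        - cinst -y strawberryperl --version 5.26.3.1
        - export "PATH=/c/Strawberry/perl/site/bin:/c/Strawberry/perl/bin:/c/Strawberry/c/bin:$PATH"
      install: cpanm --notest --installdeps .
      script: cpanm -v --test-only .
```

The duplication is the price of per-version pinning; YAML anchors can trim it, at some cost in readability.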

If, like me, you just want to validate that your Perl project builds and its tests pass on Windows (option 2), adopt the formula I used in the text-markup project. The complete .travis.yml:

language: perl
perl:
  - "5.28"
  - "5.26"
  - "5.24"
  - "5.22"
  - "5.20"
  - "5.18"
  - "5.16"
  - "5.14"
  - "5.12"
  - "5.10"
  - "5.8"

before_install:
  - sudo pip install docutils
  - sudo apt-get install asciidoc
  - eval $(curl https://travis-perl.github.io/init) --auto

jobs:
  include:
    - stage: Windows
      os: windows
      language: shell
      before_install:
        - cinst -y strawberryperl
        - export "PATH=/c/Strawberry/perl/site/bin:/c/Strawberry/perl/bin:/c/Strawberry/c/bin:$PATH"
      install:
        - cpanm --notest --installdeps .
      script:
        - cpanm -v --test-only .

The file starts with the typical Travis Perl configuration: select the language (Perl) and the versions to test. The before_install block installs a couple of dependencies and executes the travis-perl helper for more flexible Perl testing. This pattern practically serves as boilerplate for new Perl projects.

The new bit is the jobs.include section, which declares a new build stage named “Windows”. This stage runs independent of the default phase, which runs on Linux, and declares os: windows to run on Windows.

The before_install step uses the pre-installed Chocolatey package manager to install the latest version of Strawberry Perl and update the $PATH environment variable to include the paths to Perl and the build tools. Note that the Travis CI Windows environment runs inside the Git Bash shell; hence the Unix-style path configuration.

The install phase installs all dependencies for the project via cpanminus, then the script phase runs the tests, again using cpanminus.

And with the stage set, the text-markup build has a nice new stage that ensures all tests pass on Windows.

The use of cpanminus, which ships with Strawberry Perl, keeps things simple, and is essential for installing dependencies. But projects can also perform the usual gmake test1 or perl Build.PL && ./Build test dance. Install Dist::Zilla via cpanminus to manage dzil-based projects. Sadly, prove currently does not work under Git Bash.2

Perhaps Travis will add full Perl support and things will become even easier. In the meantime, I’m pleased that I no longer have to guess about Windows compatibility. The new Travis Windows environment enables a welcome increase in cross-platform confidence.


  1. Although versions of Strawberry Perl prior to 5.26 have trouble installing Makefile.PL-based modules, including dependencies. I spent a fair bit of time trying to work out how to make it work, but ran out of steam. See issue #1 for details. ↩︎

  2. I worked around this issue for Sqitch by simply adding a copy of prove to the repository. ↩︎