Just a Theory

By David E. Wheeler

Posts about Security

Borderline

In just about any discussion of GDPR compliance, two proposals always come up: disk encryption and network perimeter protection. I recently criticized the focus on disk encryption, particularly its inability to protect sensitive data from live system exploits. Next I wanted to highlight the deficiencies of perimeter protection, but in doing a little research, I found that Goran Begic has already made the case:

Many organizations, especially older or legacy enterprises, are struggling to adapt systems, behaviors, and security protocols to this new-ish and ever evolving network model. Outdated beliefs about the true nature of the network and the source of threats put many organizations, their information assets, and their customers, partners, and stakeholders at risk.

What used to be carefully monitored, limited communication channels have expanded into an ever changing system of devices and applications. These assets are necessary for your organization to do business—they are what allow you to communicate, exchange data, and make business decisions and are the vehicle with which your organization runs the business and delivers value to its clients.

Cloud computing and storage, remote workers, and the emerging preference for micro-services over monoliths1 vastly complicate network designs and erode boundaries. Uber-services such as Kubernetes recover some control by wrapping all those micro-services in the warm embrace of a monolithic orchestration layer, but by no means restore the simplicity of earlier times. Once the business requires the distribution of data and services to multiple data centers or geographies, the complexity claws its way back. Host your data and services in the cloud and you’ll find the boundary all but gone. Where’s the data? It’s everywhere.

In such an environment, staying on top of all the vulnerabilities — all the patches, all the services listening on this network or that, inside some firewall or out, accessed by whom and via what means — becomes exponentially more difficult. Even the most dedicated, careful, and meticulous of teams sooner or later overlook something. An unpatched vulnerability. An authentication bug in an internal service. A rogue cloud storage container to which an employee uploads unencrypted data. Any and all could happen. They do happen. Strive for the best; expect the worst.

Because it’s not a matter of whether or not your data will be breached. It’s simply a matter of when.

Unfortunately, compliance discussions often end with these two mitigations, disk encryption and network perimeter protection. You should absolutely adopt them, and a discussion rightfully starts with them. But then it’s not over. No, these two basics of data protection are but the first step to protect sensitive data and to satisfy the responsibility for security of processing (GDPR Article 32). Because sooner or later, no matter how comprehensive the data storage encryption and firewalling, eventually there will be a breach. And then what?

“What next” bears thinking about: How do you further reduce risk in the inevitable event of a breach? I suggest taking the provisions of the GDPR at face value, and consider three things:

  1. Privacy by design and default
  2. Anonymization and aggregation
  3. Pseudonymization

Formally, items 2 and 3 fall under item 1, but I would summarize them as:

  1. Collect only the minimum data needed for the job at hand
  2. Anonymize and aggregate sensitive data to minimize its sensitivity
  3. Pseudonymize remaining sensitive data to eliminate its breach value

Put these three together, and the risk of sensitive data loss and the costs of mitigation decline dramatically. In short, take security seriously, yes, but also take privacy seriously.
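To make item 3 concrete: one common approach to pseudonymization is to replace a direct identifier with a keyed hash. This is a minimal Python sketch, not a full pseudonymization system; the key name and value here are hypothetical, and in practice the key must live in a key management system, stored separately from the pseudonymized data, so that the data alone has no breach value.

```python
import hashlib
import hmac

# Hypothetical secret; in practice it belongs in a key management
# system, stored and managed separately from the pseudonymized data.
SECRET_KEY = b"keep-me-in-a-key-management-system"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# The same input always yields the same pseudonym, so records can still
# be joined -- but without the key, the original is not recoverable.
assert pseudonymize("alice@example.com") == pseudonymize("alice@example.com")
assert pseudonymize("alice@example.com") != pseudonymize("bob@example.com")
```

A breach of a table pseudonymized this way exposes only opaque digests, provided the key itself was never stored alongside the data.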


  1. It’s okay, as a former archaeologist I’m allowed to let the metaphor stand on its own.

The Problem With Disk Encryption

Full disk encryption provides incredible data protection for personal devices. If you haven’t enabled FileVault on your Mac, Windows Device Encryption on your PC, or Android Device Encryption on your phone, please go do it now (iOS encrypts storage by default). It’s easy, efficient, and secure. You will likely never notice the difference in usage or performance. Seriously. This is a no-brainer.

Once enabled, device encryption prevents just about anyone from accessing device data. Unless a malefactor possesses both device and authentication credentials, the data is secure.

Mostly.

Periodically, vulnerabilities arise that allow circumvention of device encryption, usually by exploiting a bug in a background service. OS vendors tend to fix such issues quickly, so keep your system up to date. And if you work in IT, enable full disk encryption on all of your users’ devices and drives. Doing so greatly reduces the risk of sensitive data exposure via lost or stolen personal devices.

Servers, however, are another matter.

The point of disk encryption is to prevent data compromise by entities with physical access to a device. If a governmental or criminal organization takes possession of encrypted storage devices, gaining access to any of the data presents an immense challenge. Their best bet is to power up the devices and scan their ports for potentially vulnerable services to exploit. The OS allows such services transparent access to the file system via automatic decryption. Exploiting such a service allows access to any data the service can access.

But, law enforcement investigations aside,1 who bothers with physical possession? Organizations increasingly rely on cloud providers with data distributed across multiple servers, perhaps hundreds or thousands, rendering the idea of physical confiscation nearly meaningless. Besides, when exfiltration typically relies on service vulnerabilities, why bother taking possession of hardware at all? Just exploit vulnerabilities remotely and leave the hardware alone.

Which brings me to the issue of compliance. I often hear IT professionals assert that simply encrypting all data at rest2 satisfies the responsibility for security of processing (GDPR Article 32). This interpretation may be legally correct3 and relatively straightforward to achieve: simply enable disk encryption, protect the keys via an appropriate and closely-monitored key management system, and migrate data to the encrypted file systems.4

This level of protection against physical access is absolutely necessary for protecting sensitive data.

Necessary, but not sufficient.

When was the last time a breach stemmed from physical access to a server? Sure, some reports in the list of data breaches identify “lost/stolen media” as the breach method. But we’re talking lost (and unencrypted) laptops and drives. Hacks (service vulnerability exploits), accidental publishing,5 and “poor security” account for the vast majority of breaches. Encryption of server data at rest addresses none of these issues.

By all means, encrypt data at rest, and for the love of Pete please keep your systems and services up-to-date with the latest patches. Taking these steps, along with full network encryption, is essential for protecting sensitive data. But don’t assume that such steps adequately protect sensitive data, or that doing so will achieve compliance with GDPR Article 32.

Don’t simply encrypt your disks or databases, declare victory, and go home.

Bear in mind that data protection comes in layers, and those layers correspond to levels of exploitable vulnerability. Simply addressing the lowest-level requirements at the data layer does nothing to prevent exposure at higher levels. Start disk encryption, but then think through how best to protect data at the application layer, the API layer, and, yes, the human layer, too.


  1. Presumably, a legitimate law enforcement investigation will compel a target to provide the necessary credentials to allow access by legal means, such as a court order, without needing to exploit the system. Such an investigation might confiscate systems to prevent a suspect from deleting or modifying data until such access can be compelled — or, if such access is impossible (e.g., the suspect is unknown, deceased, or incapacitated) — until the data can be forensically extracted.

  2. Yes, and in transit.
  3. Although currently no precedent-setting case law exists. Falling back on PCI standards may drive this interpretation.

  4. Or databases. The fundamentals are the same: encrypted data at rest with transparent access provided to services.

  5. I plan to write about accidental exposure of data in a future post.

Facebook Identity Theft

I get email:

Action Required: Confirm Your Facebook Account

Needless to say, I did not just register for Facebook.

Hrm. That’s weird, since my Facebook account dates back to 2007. Wait, there’s another email:

(219) 798-8705 added to your Facebook account

That’s not my phone number.

I’ve never seen that phone number before in my life. In fact, I removed my phone number from Facebook not long ago for privacy reasons. So what’s going on?

A quick look at the email address tells the story: It’s my Gmail address. Which I never use. Since I never use it, it’s not associated with any account, including Facebook. What’s happened is someone created a new Facebook account with my Gmail address. If I were to click the “Confirm your account” button, I would give someone else a valid Facebook account using my identity. It’d be even worse if I also approved the phone number. Doing so would cede complete control over this Facebook account to someone else. These kinds of messages are so common that it wouldn’t surprise me if some people just clicked those links and entered the confirmation code.

It’s only Facebook, you might think. But Facebook isn’t “only” anything anymore. It’s a juggernaut. Facebook is so massive, and has promoted itself so heavily as an identity platform, that many organizations rely on it for identity proofing via social logins. That means someone can “prove” they’re me by logging into that Facebook account. Via that foothold, they can gradually take control of other online accounts and effectively control the identity associated with my Gmail address.

That would not be good.

So after inspecting the email to make sure that its URLs are all actually on facebook.com, I visit the “please secure your account” link:

Secure your account?

This isn’t right…

This is a little worrying. It’s not that I think someone else is logging into my account. It’s that someone else has created an account using my Gmail address, and therefore a slice of my identity. Still, locking it down seems like a good idea. I hit the “Secure Account” button.

Secure your account?

What? Fuck no.

Now we’ve reached the point where I’m at risk of actually associating my physical photo ID with an account someone else created and controls? Fuck no. I don’t want to associate a photo ID with my real Facebook account, let alone one set up by some rando cybercriminal. Neither should you.

I close that browser tab, switch to another browser, and log into my real Facebook account. If the problem is that someone else wants proof of control over my Gmail address, I have to take it back. So I add my Gmail address to the settings for my real Facebook account, wait for the confirmation email, and hit the confirmation link.

Contact Email Confirmation

That should do it.

Great, that other account no longer has any control over my Gmail address. Hope it doesn’t have any other email addresses associated with it.

Oh, one more step: Facebook decided this new address should be my primary email address, so I had to change it back.

I don’t know how people without Facebook accounts would deal with this situation. Facebook needs to give people a way to say: “This is not me, this is not my account, I don’t want an account, please delete this bogus account.” It shouldn’t require uploading a photo ID, either.

Token Dimensions

C’est moi, in my second post on Tokenization for the iovation blog:

These requirements demonstrate the two dimensions of tokenization: reversibility and determinism. Reversible tokens may be detokenized to recover their original values. Deterministic tokens are always the same given the same inputs.

The point is to evaluate the private data fields to be tokenized in order to determine where along these dimensions they fall, so that one can make informed choices when evaluating tokenization products and services.
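The two dimensions can be sketched in a few lines. This is a toy illustration, not iovation's implementation: the key, the in-memory vault, and the token lengths are all assumptions made for the example. A deterministic token is derived from the value itself, so it is stable across calls but cannot be reversed; a reversible token is random, so it can be detokenized via a secured mapping but differs on every call.

```python
import hashlib
import hmac
import secrets

KEY = b"hypothetical-tokenization-key"  # assumption: managed out of band
_vault = {}  # token -> original value; stands in for a secured token vault

def deterministic_token(value: str) -> str:
    # Same input, same token: supports joins and lookups, but there is
    # no mapping to reverse, so the original cannot be recovered.
    return hmac.new(KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def reversible_token(value: str) -> str:
    # Random token: the mapping lives in the vault, so it can be
    # detokenized -- but the same input yields a new token every time.
    token = secrets.token_hex(8)
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    return _vault[token]
```

A field used as a join key wants the deterministic flavor; a field that must occasionally be recovered in the clear wants the reversible one. Evaluating each field against these two questions is exactly the exercise the post describes.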

iovation Tokenization

C’est moi, in the first of a series for the iovation blog:

Given our commitment to responsible data stewardship, as well as the invalidation of Safe Harbor and the advent of the GDPR, we saw an opportunity to reduce these modest but very real risks without impacting the efficacy of our services. A number of methodologies for data protection exist, including encryption, strict access control, and tokenization. We undertook the daunting task to determine which approaches best address data privacy compliance requirements and work best to protect customers and users — without unacceptable impact on service performance, cost to maintain infrastructure, or loss of product usability.

The post covers encryption, access control, and tokenization.

A Porous “Privacy Shield”

Glyn Moody, in Ars Technica, on the proposed replacement for the recently struck-down Safe Harbor framework:

However, with what seems like extraordinarily bad timing, President Obama has just made winning the trust of EU citizens even harder. As Ars reported last week, the Obama administration is close to allowing the NSA to share more of the private communications it intercepts with other federal agencies, including the FBI and the CIA, without removing identifying information first.

In other words, not only will the new Privacy Shield allow the NSA to continue to scoop up huge quantities of personal data from EU citizens, it may soon be allowed to share them widely. That’s unlikely to go down well with Europeans, the Article 29 Working Party, or the CJEU—all of which ironically increases the likelihood that the new Privacy Shield will suffer the same fate as the Safe Harbour scheme it has been designed to replace.

So let me get this straight. Under this proposal:

  • The NSA can continue to bulk collect EU citizen data.
  • That data may be shared with other agencies in the US government.
  • Said collection must fall under one of six allowed cases, one of which is undefined “counter-terrorism” purposes. No one ever abused that kind of thing before.
  • The US claims there is no more bulk surveillance, except that there is under those six cases.
  • The appointed “independent ombudsman” to address complaints by EU citizens will be a single US Undersecretary of State.
  • Complaints can also be addressed to US companies housing EU citizen data, even though, in the absence of another Snowden-scale whistle-blowing, they may have no idea their data is being surveilled.

Color me skeptical that this would work, let alone not be thrown out by another case similar to the one that killed Safe Harbor.

I have a better idea. How about eliminating mass surveillance?

Do We Have a Right to Security?

Rich Mogull:

Don’t be distracted by the technical details. The model of phone, the method of encryption, the detailed description of the specific attack technique, and even the feasibility are all irrelevant.

Don’t be distracted by the legal wrangling. By the timing, the courts, or the laws in question. Nor by politicians, proposed legislation, Snowden, or speeches at think tanks or universities.

Don’t be distracted by who is involved. Apple, the FBI, dead terrorists, or common drug dealers.

Everything, all of it, boils down to a single question.

Do we have a right to security?

How about we introduce a bill guaranteeing a right to security. Senator Wyden?

(Via Daring Fireball)

Anthem Breach Harms Consumers

Paul Roberts in Digital Guardian:

Whether or not harm has occurred to plaintiffs is critical for courts to decide whether the plaintiff has a right – or “standing” – to sue in the first place. But proving that data exposed in a breach has actually been used for fraud is notoriously difficult.

In her decision in the Anthem case, [U.S. District Judge Lucy] Koh reasoned that the theft of personal identification information is harm to consumers in itself, regardless of whether any subsequent misuse of it can be proven. Allegations of a “concrete and imminent threat of future harm” are enough to establish an injury and standing in the early stages of a breach suit, she said.

Seems like a no-brainer to me. Personal information is just that: personal. Organizations that collect and store personal information must take every step they can to protect it. Failure to do so harms their users, exposing them to increased risk of identity theft, fraud, surveillance, and abuse. It’s reasonable to expect that firms not be insulated from litigation for failing to protect user data.

Apple Challenges FBI Decryption Demand

Incredible post from Apple, signed by Tim Cook:

The government is asking Apple to hack our own users and undermine decades of security advancements that protect our customers — including tens of millions of American citizens — from sophisticated hackers and cybercriminals. The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe.

We can find no precedent for an American company being forced to expose its customers to a greater risk of attack. For years, cryptologists and national security experts have been warning against weakening encryption. Doing so would hurt only the well-meaning and law-abiding citizens who rely on companies like Apple to protect their data. Criminals and bad actors will still encrypt, using tools that are readily available to them.

I only wish there was a place to co-sign. Companies must do all they can to safeguard the privacy of their users, preferably such that only users can unlock and access their personal information. It’s in the interest of the government to ensure that private data remain private. Forcing Apple to crack its own encryption sets a dangerous precedent likely to be exploited by cybercriminals for decades to come. Shame on the FBI.

How Does One Prevent Online Ballot Box Stuffing?

I need to set up an online voting system. It needs to be more robust than a simple polling system, in order, primarily, to prevent ballot box stuffing. Of course I realize that it’s impossible to prevent ballot box stuffing by a determined individual, but what I want to prevent is scripted attacks and denial of service attacks. The features I’ve come up with so far to prevent attacks are:

  • Require site registration. You must be a registered user of the site in order to vote in an election, and of course, you can vote only once.
  • Ignore votes when cookies are disabled, although make it look like a successful submission.
  • Update result statistics periodically, rather than after every vote. This will make it difficult for an exploiter to tell if his votes are being counted.
  • Use a CAPTCHA to prevent scripted voting.
  • Send a new digest hidden in every request that must be sent back and checked against a server-side session in order to prevent “curl” attacks.
  • Log IP addresses for all votes. These can be checked later if ballot box stuffing is suspected (though we’ll have to ignore it if many users are behind a proxy server).

Of course someone behind a well-known proxy server who repeatedly creates new user accounts using different email addresses and deletes his cookies before every vote could do some ballot box stuffing, but I think that the above features will minimize the risk. But I’m sure I’m forgetting things. What other steps should I take?
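The per-request digest idea from the list above can be sketched roughly like this. All names here are hypothetical, and a real deployment would persist state in a session store rather than module-level dicts, but the shape is: issue a fresh server-remembered digest with each rendered ballot form, and accept a vote only if it returns that digest, exactly once.

```python
import hashlib
import hmac
import secrets

SESSION_SECRET = secrets.token_bytes(32)  # hypothetical per-deployment secret
_pending = {}  # session_id -> digest expected with the next ballot submission

def issue_ballot_token(session_id: str) -> str:
    # A fresh nonce for each rendered form; the server records what it
    # issued, so a hand-rolled "curl" request won't have a valid digest.
    nonce = secrets.token_hex(16)
    digest = hmac.new(SESSION_SECRET, (session_id + nonce).encode("utf-8"),
                      hashlib.sha256).hexdigest()
    _pending[session_id] = digest
    return digest

def accept_vote(session_id: str, submitted: str) -> bool:
    # Pop, so the digest is single-use: replaying a captured request fails.
    expected = _pending.pop(session_id, None)
    return expected is not None and hmac.compare_digest(expected, submitted)
```

The single-use pop is what defeats a scripted replay loop: the attacker must fetch a fresh form (and, with the CAPTCHA in front of it, solve a new challenge) for every single vote.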

Leave a comment to let me know.

Looking for the comments? Try the old layout.

Script Kitties

Julie was reading an article about Internet security in The New Yorker the other day, when she suddenly turned to me and said, “Oh! All this time when I heard you say ‘script kiddies’, what I heard was ‘k-i-t-t-i-e-s’!”

Fear the feline crackers.

Looking for the comments? Try the old layout.

Windows Virus Hell

So to finish up development and testing of Test.Harness.Browser in IE 6 last week, I rebooted my Linux server (the one running justatheory.com) into Windows 98, got everything working, and rebooted back into Linux. I felt that the hour or two’s worth of downtime for my site was worth it to get the new version of Test.Simple out, and although I had ordered a new Dell, didn’t want to wait for it. And it worked great; I’m very pleased with Test.Simple 0.20.

But then, in unrelated news, I released Bricolage 1.9.0, the first development release towards Bricolage 1.10, which I expect to ship next month. One of the things I’m most excited about in this release is the new PHP templating support. So on George Schlossnagle’s advice, I sent an email to webmaster@php.net. It bounced. It was late on Friday, and I’m so used to bounces being problems on the receiving end, that I simply forwarded it to George with the comment, “What the?” and went to fix dinner for company.

Then this morning I asked George, via IM, if he’d received my email. He hadn’t. I sent it again; no dice. So he asked me to paste the bounce, and as I did so, looked at it more carefully. It had this important tidbit that I’d failed to notice before:

140.211.166.39 failed after I sent the message.
Remote host said: 550-5.7.1 reject content [xbl]
550 See http://master.php.net/mail/why.php?why=SURBL

“That’s curious,” I thought, and went to read the page in question. It said I likely had a domain name in my email associated with a blacklisted IP address. Well, there were only two domain names in that email, bricolage.cc and justatheory.com, and I couldn’t see how either one of them could have been identified as a virus host. But sure enough, a quick search of the CBL database revealed that the IP address for justatheory.com—and therefore my entire home LAN—had been blacklisted. I couldn’t imagine why; at first I thought maybe it was because of past instances of blog spam appearing here, but then George pointed out that the listing had been added on August 18. So I thought back…and realized that was just when I was engaging in my JavaScript debugging exercise.

Bloody Windows!

So I took steps to correct the problem:

  1. Update my router’s firmware. I’ve been meaning to do that for a while, anyway, and was hoping to get some new firewall features. Alas, no, but maybe I’ll be able to connect to a virtual PPTP network the next time I need to.

  2. Blocked all outgoing traffic from any computer on my LAN on port 25. I send email through my ISP, but use port 587 because I found in the last year that I couldn’t send mail on port 25 on some networks I’ve visited (such as in hotels). Now I know why: so that no network users inadvertently send out viruses from their Windows boxes! I’d rather just prevent certain hosts (my Windows boxen) from sending on port 25, but the router’s NAT is not that sophisticated. So I have to block them all.

  3. Rebooted the server back into Windows 98 and installed and ran Norton AntiVirus. This took forever, but found and fixed two instances of WIN32Mimail.l@mm and removed a spyware package.

  4. Rebooted back into Linux and cleared my IP address from the blacklist databases. I don’t expect to ever use that box for Windows again, now that I have the new Dimension.

The new box comes with Windows XP SP 2 and the Symantec tools, so I don’t expect it to be a problem, especially since it can’t use port 25. But this is a PITA, and I really feel for the IT departments that have to deal with this shit day in and day out.

What I don’t understand is how I got this virus, since I haven’t used Windows 98 in this computer in a long time. How long? Here’s a clue: When I clicked the link in Norton AntiVirus to see more information on WIN32Mimail.l@mm, Windows launched my default browser: Netscape Communicator! In addition, I don’t think I’ve used this box to check email since around 2000, and I never click on attachments from unknown senders, and never .exe or .scr files at all (my mail server automatically rejects incoming mail with such attachments, and has for at least a year).

But anyway, it’s all cleaned up now, and I’ve un-blacklisted my IP, so my emails should be deliverable again. But I’m left wondering what can be done about this problem. It’s easy for me to feel safe using my Mac, Linux, and FreeBSD boxes, but, really, what keeps the virus and worm writers from targeting them? Nothing, right? Furthermore, what’s to stop the virus and worm writers from using port 587 to send their emails? Nothing, right? Once they do start using 587—and I’m sure they will—how will anyone be able to send mail to an SMTP server on one network from another network? Because you know that once 587 becomes a problem, network admins will shut down that port, too.

So what’s to be done about this? How can one successfully send mail to a server not on your local network? How will business people be able to send email through their corporate servers from hotel networks? I can see only a few options:

  • Require them to use a mail server on the local network. They’ll have to reconfigure their mail client to use it, and then change it back when they get back to the office. What a PITA. This might work out all right if there were some sort of DNS-like service for SMTP servers, but then there would be nothing to prevent the virus software from using it, either.
  • You can’t. You have to authenticate onto the other network using a VPN. Lots of companies rely on this approach already, but smaller companies that don’t have the IT resources to set up a VPN are SOL. And folks just using their ISPs are screwed, too.
  • Create a new email protocol that’s inherently secure. This would require a different port, some sort of negotiation and authentication process, and a way for the hosting network to know that it’s cool to use. But this probably wouldn’t work, either, because then the virus software can also connect via such a protocol to a server that’s friendly to it, right?

None of these answers is satisfactory. I guess I’ll have to set up an authenticating SMTP server and a VPN for Kineticode once port 587 starts getting blocked. Anyone else got any brilliant solutions to this problem?

Looking for the comments? Try the old layout.

The End of Civilization

It’s the end of civilization as we know it.

Looking for the comments? Try the old layout.

iPod Threatens UK Military Security

20 GB iPod

Following up on my screed against the idea of the “iPod security threat”, James Duncan Davidson sent me a link to this story about how the UK military has decided that the iPod is a security threat.

“With USB devices, if you plug it straight into the computer you can bypass passwords and get right on the system,” RAF Wing Commander Peter D’Ardenne told Reuters.

“That’s why we had to plug that gap,” he said, adding that the policy was put into effect when the MoD switched to the USB-friendly Microsoft XP operating system over the past year.

Huh. Do you mean to tell me that if you plug into the USB port of a PC that no one is logged in to, you can get access to the contents of the PC without logging in? You know, that sounds more like a Windows security flaw than an iPod problem. I mean, it’s reasonable for the military to ban external media in order to prevent their personnel and contractors from copying sensitive data onto personal devices for unknown purposes. But this Windows security hole seems, well, huge.

And the truth is that these articles that single out the iPod as a security threat are being disingenuous, in that it’s much easier and much cheaper to use a USB Flash Drive. Furthermore, this banning of storage devices really only keeps honest people honest; those who really want to copy sensitive information to take home will figure out a way to do it if they’re motivated enough.

So yeah, highly sensitive security establishments should ban personal external storage devices to keep honest people honest, but really, they should also fix the real security problem with their operating system of choice.

Looking for the comments? Try the old layout.

Gartner: iPod is a Security Threat

20 GB iPod

Well, this is entertaining. It seems that the Gartner Group has decided that iPods are a significant security threat. I think it’s great that a company like that makes its money by telling people that, yes, you can copy files between your PC and your iPod, and that poses a serious security threat. Please.

The problem, of course, is not the iPod. Or digital cameras. Or floppies. Or CD burners. No, the problem is people. I prefer to build a company that trusts its employees. Novel concept, I know. So here’s the mantra: iPods aren’t security threats; employees are security threats.

Now, I had to think carefully about posting this, because it reminded me, suddenly, of the old gun nut statement that guns don’t kill people, people kill people. The reason why I’m willing to use it for the iPod and not guns, however, has to do with design. Guns are designed to kill. It kind of makes the statement moot. I mean, what would you expect people to do with them? iPods, however, are not designed to breach security. They’re designed to listen to music, to store files, to copy your calendar, etc. Now, whether an individual person decides to use the iPod in breach of a company’s security protocols is a matter independent of the iPod’s design and intended use.

So the mantra holds: iPods aren’t security threats; employees are security threats. But guns, yeah, they’re pretty much designed for killing.

Looking for the comments? Try the old layout.