Just a Theory

By David E. Wheeler

Posts about Security

Facebook Identity Theft

I get email:

Action Required: Confirm Your Facebook Account

Needless to say, I did not just register for Facebook.

Hrm. That’s weird, since my Facebook account dates back to 2007. Wait, there’s another email:

(219) 798-8705 added to your Facebook account

That’s not my phone number.

I’ve never seen that phone number before in my life. In fact, I removed my phone number from Facebook not long ago for privacy reasons. So what’s going on?

A quick look at the email address tells the story: It’s my Gmail address. Which I never use. Since I never use it, it’s not associated with any account, including Facebook. What’s happened is someone created a new Facebook account with my Gmail address. If I were to click the “Confirm your account” button, I would give someone else a valid Facebook account using my identity. It’d be even worse if I also approved the phone number. Doing so would cede complete control over this Facebook account to someone else. These kinds of messages are so common that it wouldn’t surprise me if some people just clicked those links and entered the confirmation code.

It’s only Facebook, you might think. But Facebook isn’t “only” anything anymore. It’s a juggernaut. Facebook is so massive, and has promoted itself so heavily as an identity platform, that many organizations rely on it for identity proofing via social logins. That means someone can “prove” they’re me by logging into that Facebook account. From that foothold, they could gradually take control of other online accounts and effectively control the identity associated with my Gmail address.

That would not be good.

So after inspecting the email to make sure that its URLs are all actually on facebook.com, I visit the “please secure your account” link:

Secure your account?

This isn’t right…

This is a little worrying. It’s not that I think someone else is logging into my account. It’s that someone else has created an account using my Gmail address, and therefore a slice of my identity. Still, locking it down seems like a good idea. I hit the “Secure Account” button.

Secure your account?

What? Fuck no.

Now we’ve reached the point where I’m at risk of actually associating my physical photo ID with an account someone else created and controls? Fuck no. I don’t want to associate a photo ID with my real Facebook account, let alone one set up by some rando cybercriminal. Neither should you.

I close that browser tab, switch to another browser, and log into my real Facebook account. If the problem is that someone else wants proof of control over my Gmail address, I have to take it back. So I add my Gmail address to the settings for my real Facebook account, wait for the confirmation email, and hit the confirmation link.

Contact Email Confirmation

That should do it.

Great, that other account no longer has any control over my Gmail address. Hope it doesn’t have any other email addresses associated with it.

Oh, one more step: Facebook decided this new address should be my primary email address, so I had to change it back.

I don’t know how people without Facebook accounts would deal with this situation. Facebook needs to give people a way to say: “This is not me, this is not my account, I don’t want an account, please delete this bogus account.” It shouldn’t require uploading a photo ID, either.

Token Dimensions

C’est moi, in my second post on Tokenization for the iovation blog:

These requirements demonstrate the two dimensions of tokenization: reversibility and determinism. Reversible tokens may be detokenized to recover their original values. Deterministic tokens are always the same given the same inputs.

The point is to evaluate the private data fields to be tokenized in order to determine where along these dimensions they fall, so that one can make informed choices when evaluating tokenization products and services.
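
To make those dimensions concrete, here’s a minimal sketch in today’s Python, of my own devising rather than anything from the iovation posts: a keyed hash yields deterministic but irreversible tokens, while a vault of random tokens yields reversible but non-deterministic ones. The names and key handling are purely illustrative.

    import hmac
    import hashlib
    import secrets

    SECRET_KEY = b"illustrative-key"  # hypothetical; a real service would use managed keys

    def deterministic_token(value):
        # Deterministic, irreversible: the same input always yields the same token.
        return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

    class TokenVault:
        # Reversible, non-deterministic: random tokens mapped back to their values.
        def __init__(self):
            self._store = {}

        def tokenize(self, value):
            token = secrets.token_hex(16)  # a fresh random token on every call
            self._store[token] = value
            return token

        def detokenize(self, token):
            return self._store[token]

    vault = TokenVault()
    token = vault.tokenize("555-12-3456")
    assert vault.detokenize(token) == "555-12-3456"                  # reversible
    assert deterministic_token("abc") == deterministic_token("abc")  # deterministic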

iovation Tokenization

C’est moi, in the first of a series for the iovation blog:

Given our commitment to responsible data stewardship, as well as the invalidation of Safe Harbor and the advent of the GDPR, we saw an opportunity to reduce these modest but very real risks without impacting the efficacy of our services. A number of methodologies for data protection exist, including encryption, strict access control, and tokenization. We undertook the daunting task to determine which approaches best address data privacy compliance requirements and work best to protect customers and users — without unacceptable impact on service performance, cost to maintain infrastructure, or loss of product usability.

The post covers encryption, access control, and tokenization.

A Porous “Privacy Shield”

Glyn Moody, in Ars Technica, on the proposed replacement for the recently struck-down Safe Harbor framework:

However, with what seems like extraordinarily bad timing, President Obama has just made winning the trust of EU citizens even harder. As Ars reported last week, the Obama administration is close to allowing the NSA to share more of the private communications it intercepts with other federal agencies, including the FBI and the CIA, without removing identifying information first.

In other words, not only will the new Privacy Shield allow the NSA to continue to scoop up huge quantities of personal data from EU citizens, it may soon be allowed to share them widely. That’s unlikely to go down well with Europeans, the Article 29 Working Party, or the CJEU—all of which ironically increases the likelihood that the new Privacy Shield will suffer the same fate as the Safe Harbour scheme it has been designed to replace.

So let me get this straight. Under this proposal:

  • The NSA can continue to bulk collect EU citizen data.
  • That data may be shared with other agencies in the US government.
  • Said collection must fall under six allowed cases, one of which is undefined “counter-terrorism” purposes. No one ever abused that kind of thing before.
  • The US claims there is no more bulk surveillance, except that there is under those six cases.
  • The appointed “independent ombudsman” to address complaints by EU citizens will be a single US Undersecretary of State.
  • Complaints can also be addressed to US companies housing EU citizen data, even though, in the absence of another Snowden-scale whistle-blowing, they may have no idea their data is being surveilled.

Color me skeptical that this would work, let alone not be thrown out by another case similar to the one that killed Safe Harbor.

I have a better idea. How about eliminating mass surveillance?

Do We Have a Right to Security?

Rich Mogull:

Don’t be distracted by the technical details. The model of phone, the method of encryption, the detailed description of the specific attack technique, and even the feasibility are all irrelevant.

Don’t be distracted by the legal wrangling. By the timing, the courts, or the laws in question. Nor by politicians, proposed legislation, Snowden, or speeches at think tanks or universities.

Don’t be distracted by who is involved. Apple, the FBI, dead terrorists, or common drug dealers.

Everything, all of it, boils down to a single question.

Do we have a right to security?

How about we introduce a bill guaranteeing a right to security? Senator Wyden?

(Via Daring Fireball)

Anthem Breach Harms Consumers

Paul Roberts in Digital Guardian:

Whether or not harm has occurred to plaintiffs is critical for courts to decide whether the plaintiff has a right – or “standing” – to sue in the first place. But proving that data exposed in a breach has actually been used for fraud is notoriously difficult.

In her decision in the Anthem case, [U.S. District Judge Lucy] Koh reasoned that the theft of personal identification information is harm to consumers in itself, regardless of whether any subsequent misuse of it can be proven. Allegations of a “concrete and imminent threat of future harm” are enough to establish an injury and standing in the early stages of a breach suit, she said.

Seems like a no-brainer to me. Personal information is just that: personal. Organizations that collect and store personal information must take every step they can to protect it. Failure to do so harms their users, exposing them to increased risk of identity theft, fraud, surveillance, and abuse. It’s reasonable to expect that firms not be insulated from litigation for failing to protect user data.

Apple Challenges FBI Decryption Demand

Incredible post from Apple, signed by Tim Cook:

The government is asking Apple to hack our own users and undermine decades of security advancements that protect our customers — including tens of millions of American citizens — from sophisticated hackers and cybercriminals. The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe.

We can find no precedent for an American company being forced to expose its customers to a greater risk of attack. For years, cryptologists and national security experts have been warning against weakening encryption. Doing so would hurt only the well-meaning and law-abiding citizens who rely on companies like Apple to protect their data. Criminals and bad actors will still encrypt, using tools that are readily available to them.

I only wish there were a place to co-sign. Companies must do all they can to safeguard the privacy of their users, preferably such that only users can unlock and access their personal information. It’s in the interest of the government to ensure that private data remain private. Forcing Apple to crack its own encryption sets a dangerous precedent likely to be exploited by cybercriminals for decades to come. Shame on the FBI.

How Does One Prevent Online Ballot Box Stuffing?

I need to set up an online voting system. It needs to be more robust than a simple polling system, primarily in order to prevent ballot box stuffing. Of course I realize that it’s impossible to prevent ballot box stuffing by a determined individual; what I want to prevent are scripted attacks and denial-of-service attacks. The features I’ve come up with so far to prevent attacks are:

  • Require site registration. You must be a registered user of the site in order to vote in an election, and of course, you can vote only once (enforceable with a database constraint; see the second sketch below).
  • Ignore votes when cookies are disabled, although make it look like a successful submission.
  • Update result statistics periodically, rather than after every vote. This will make it difficult for an exploiter to tell if his votes are being counted.
  • Use a CAPTCHA to prevent scripted voting.
  • Send a new digest hidden in every request that must be sent back and checked against a server-side session in order to prevent “curl” attacks (see the first sketch after this list).
  • Log IP addresses for all votes. These can be checked later if ballot box stuffing is suspected (though we’ll have to ignore it if many users are behind a proxy server).
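
The per-request digest in the fifth item might look something like this minimal sketch in today’s Python; the session store and function names are hypothetical:

    import hmac
    import secrets

    def issue_token(session):
        # Generate a fresh token, remember it server-side, and embed it in the ballot form.
        token = secrets.token_hex(16)
        session["vote_token"] = token
        return token

    def check_token(session, submitted):
        # Count a ballot only if the submitted token matches the session's, then burn it.
        expected = session.pop("vote_token", None)
        return expected is not None and hmac.compare_digest(expected, submitted)

    session = {}                                 # stands in for a server-side session store
    form_token = issue_token(session)
    assert check_token(session, form_token)      # a genuine form submission counts
    assert not check_token(session, form_token)  # a replayed "curl" request is rejected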

Of course, someone behind a well-known proxy server who repeatedly creates new user accounts with different email addresses and deletes his cookies before every vote could still do some ballot box stuffing, but I think that the above features will minimize the risk. But I’m sure I’m forgetting things. What other steps should I take?
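
As for the one-vote-per-user rule in the first item, it can be made structural rather than procedural; here’s a minimal sketch with SQLite, using a hypothetical table layout:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE votes (
            election_id INTEGER,
            user_id     INTEGER,
            choice      TEXT,
            PRIMARY KEY (election_id, user_id)  -- one vote per user per election
        )
    """)
    db.execute("INSERT INTO votes VALUES (1, 42, 'yes')")
    try:
        db.execute("INSERT INTO votes VALUES (1, 42, 'no')")  # the same user votes again
    except sqlite3.IntegrityError:
        print("duplicate vote rejected")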

Leave a comment to let me know.


Script Kitties

Julie was reading an article about Internet security in The New Yorker the other day, when she suddenly turned to me and said, “Oh! All this time when I heard you say ‘script kiddies’, what I heard was ‘k-i-t-t-i-e-s’!”

Fear the feline crackers.


Windows Virus Hell

So to finish up development and testing of Test.Harness.Browser in IE 6 last week, I rebooted my Linux server (the one running justatheory.com) into Windows 98, got everything working, and rebooted back into Linux. I felt that an hour or two’s worth of downtime for my site was worth it to get the new version of Test.Simple out, and although I had ordered a new Dell, I didn’t want to wait for it. And it worked great; I’m very pleased with Test.Simple 0.20.

But then, in unrelated news, I released Bricolage 1.9.0, the first development release towards Bricolage 1.10, which I expect to ship next month. One of the things I’m most excited about in this release is the new PHP templating support. So on George Schlossnagle’s advice, I sent an email to webmaster@php.net. It bounced. It was late on Friday, and I’m so used to bounces being problems on the receiving end that I simply forwarded it to George with the comment, “What the?” and went to fix dinner for company.

Then this morning I asked George, via IM, if he’d received my email. He hadn’t. I sent it again; no dice. So he asked me to paste the bounce, and as I did so, I looked at it more carefully. It had this important tidbit that I’d failed to notice before:

140.211.166.39 failed after I sent the message.
Remote host said: 550-5.7.1 reject content [xbl]
550 See http://master.php.net/mail/why.php?why=SURBL

“That’s curious,” I thought, and went to read the page in question. It said I likely had a domain name in my email associated with a blacklisted IP address. Well, there were only two domain names in that email, bricolage.cc and justatheory.com, and I couldn’t see how either one of them could have been identified as a virus host. But sure enough, a quick search of the CBL database revealed that the IP address for justatheory.com—and therefore my entire home LAN—had been blacklisted. I couldn’t imagine why; at first I thought maybe it was because of past instances of blog spam appearing here, but then George pointed out that the listing had been added on August 18. So I thought back…and realized that was just when I was engaging in my JavaScript debugging exercise.
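
Incidentally, checking a DNSBL like the CBL is just a DNS lookup with the address’s octets reversed; here’s a minimal sketch in today’s Python (the zone name is the CBL’s, and 127.0.0.2 is the conventional test entry):

    import socket

    def dnsbl_listed(ip, zone="cbl.abuseat.org"):
        # An A record for the reversed address in the DNSBL zone means "listed".
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)  # any answer at all means the address is listed
            return True
        except socket.gaierror:
            return False

    print(dnsbl_listed("127.0.0.2"))  # DNSBLs list this test address by convention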

Bloody Windows!

So I took steps to correct the problem:

  1. Update my router’s firmware. I’ve been meaning to do that for a while, anyway, and was hoping to get some new firewall features. Alas, no, but maybe I’ll be able to connect to a PPTP virtual private network the next time I need to.

  2. Blocked all outgoing traffic from any computer on my LAN on port 25. I send email through my ISP, but use port 587 because I found in the last year that I couldn’t send mail on port 25 on some networks I’ve visited (such as in hotels). Now I know why: so that no network users inadvertently send out viruses from their Windows boxes! I’d rather just prevent certain hosts (my Windows boxen) from sending on port 25, but the router’s NAT is not that sophisticated. So I have to block them all.

  3. Rebooted the server back into Windows 98 and installed and ran Norton AntiVirus. This took forever, but found and fixed two instances of WIN32Mimail.l@mm and removed a spyware package.

  4. Rebooted back into Linux and cleared my IP address from the blacklist databases. I don’t expect to ever use that box for Windows again, now that I have the new Dimension.

The new box comes with Windows XP SP 2 and the Symantec tools, so I don’t expect it to be a problem, especially since it can’t use port 25. But this is a PITA, and I really feel for the IT departments that have to deal with this shit day in and day out.

What I don’t understand is how I got this virus, since I haven’t used Windows 98 on this computer in a long time. How long? Here’s a clue: When I clicked the link in Norton AntiVirus to see more information on WIN32Mimail.l@mm, Windows launched my default browser: Netscape Communicator! In addition, I don’t think I’ve used this box to check email since around 2000, and I never click on attachments from unknown senders, and never .exe or .scr files at all (my mail server automatically rejects incoming mail with such attachments, and has for at least a year).

But anyway, it’s all cleaned up now, and I’ve un-blacklisted my IP, so my emails should be deliverable again. But I’m left wondering what can be done about this problem. It’s easy for me to feel safe using my Mac, Linux, and FreeBSD boxes, but, really, what keeps the virus and worm writers from targeting them? Nothing, right? Furthermore, what’s to stop the virus and worm writers from using port 587 to send their emails? Nothing, right? Once they do start using 587—and I’m sure they will—how will anyone be able to send mail to an SMTP server on one network from another network? Because you know that once 587 becomes a problem, network admins will shut down that port, too.

So what’s to be done about this? How can one successfully send mail to a server not on the local network? How will business people be able to send email through their corporate servers from hotel networks? I can see only a few options:

  • Require them to use a mail server on the local network. They’ll have to reconfigure their mail client to use it, and then change it back when they get back to the office. What a PITA. This might work out all right if there were some sort of DNS-like service for SMTP servers, but then there would be nothing to prevent the virus software from using it, either.
  • You can’t. You have to authenticate onto the other network using a VPN. Lots of companies rely on this approach already, but smaller companies that don’t have the IT resources to set up a VPN are SOL. And folks just using their ISPs are screwed, too.
  • Create a new email protocol that’s inherently secure. This would require a different port, some sort of negotiation and authentication process, and a way for the hosting network to know that it’s cool to use. But this probably wouldn’t work, either, because then the virus software could also connect via such a protocol to a server that’s friendly to it, right?

None of these answers is satisfactory. I guess I’ll have to set up an authenticating SMTP server and a VPN for Kineticode once port 587 starts getting blocked. Anyone else got any brilliant solutions to this problem?
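
For reference, authenticated submission over port 587 looks something like this minimal sketch in today’s Python; the host and credentials are placeholders:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "me@example.com"
    msg["To"] = "you@example.com"
    msg["Subject"] = "Sent via the submission port"
    msg.set_content("Delivered over port 587 with STARTTLS and SMTP AUTH.")

    with smtplib.SMTP("mail.example.com", 587) as smtp:
        smtp.starttls()                               # encrypt the connection first
        smtp.login("me@example.com", "app-password")  # authenticate before relaying
        smtp.send_message(msg)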


The End of Civilization

It’s the end of civilization as we know it.


iPod Threatens UK Military Security


Following up on my screed against the idea of the “iPod security threat”, James Duncan Davidson sent me a link to this story about how the UK military has decided that the iPod is a security threat.

“With USB devices, if you plug it straight into the computer you can bypass passwords and get right on the system,” RAF Wing Commander Peter D’Ardenne told Reuters.

“That’s why we had to plug that gap,” he said, adding that the policy was put into effect when the MoD switched to the USB-friendly Microsoft XP operating system over the past year.

Huh. Do you mean to tell me that if you plug a device into the USB port of a PC that no one is logged in to, you can get access to the contents of the PC without logging in? You know, that sounds more like a Windows security flaw than an iPod problem. I mean, it’s reasonable for the military to ban external media in order to prevent their personnel and contractors from copying sensitive data onto personal devices for unknown purposes. But this Windows security hole seems, well, huge.

And the truth is that these articles that single out the iPod as a security threat are being disingenuous, in that it’s much easier and much cheaper to use a USB flash drive. Furthermore, this banning of storage devices really only keeps honest people honest; those who really want to copy sensitive information to take home will figure out a way to do it if they’re motivated enough.

So yeah, highly sensitive security establishments should ban personal external storage devices to keep honest people honest, but really, they should also fix the real security problem with their operating system of choice.


Gartner: iPod is a Security Threat


Well, this is entertaining. It seems that the Gartner Group has decided that iPods are a significant security threat. I think it’s great that a company like that makes its money by telling people that, yes, you can copy files between your PC and your iPod, and that poses a serious security threat. Please.

The problem, of course, is not the iPod. Or digital cameras. Or floppies. Or CD burners. No, the problem is people. I prefer to build a company that trusts its employees. Novel concept, I know. So here’s the mantra: iPods aren’t security threats; employees are security threats.

Now, I had to think carefully about posting this, because it reminded me, suddenly, of the old gun nut statement that guns don’t kill people, people kill people. The reason why I’m willing to use it for the iPod and not guns, however, has to do with design. Guns are designed to kill. It kind of makes the statement moot. I mean, what would you expect people to do with them? iPods, however, are not designed to breach security. They’re designed to listen to music, to store files, to copy your calendar, etc. Now, whether an individual person decides to use the iPod in breach of a company’s security protocols is a matter independent of the iPod’s design and intended use.

So the mantra holds: iPods aren’t security threats; employees are security threats. But guns, yeah, they’re pretty much designed for killing.
