Opsec examples: 6 spectacular operational security failures

Every day, most of us leave trails of online breadcrumbs behind us, disconnected pieces of data that a determined sleuth could connect to learn about our activities and perhaps break through our veil of anonymity. The struggle to prevent attackers from putting these puzzle pieces together is known as operational security (opsec).

Most of us don’t think too much about all this: nobody’s trying to track us down, and if they did, the consequences wouldn’t be too worrisome. But there are those for whom the stakes are much higher. Would it be so bad if someone recognized the handles of your anonymous social media accounts as the name of one of your big work projects or the subject of your senior thesis? It might be if you were the director of the FBI. Does it matter if the selfies you upload to social media have location data embedded in them, or if your fitness tracker sends anonymized data about your jogging route to its manufacturer? It might if you’re a soldier on a secret military base or in a country where your government swears it hasn’t sent any troops.

Hackers and cybercriminals—of both the freelance and state-sponsored variety—are generally quick to exploit any failures in opsec made by potential victims. That’s why it’s perhaps surprising that these malicious actors often fail to cover their own online tracks, whether due to arrogance, incompetence, or some combination of the two. You can view these incidents as morality plays in which the bad guys get their comeuppance, but maybe it’s better to think of them as cautionary tales: you might not be spying for the Chinese government or running an online drug market, but you could make the same mistakes these cybercriminals did, to your peril.

All roads lead back to Dread Pirate Roberts

For a few years in the early 2010s, the Silk Road was a source of fascination and frustration for computer security researchers and law enforcement alike. An underground marketplace where users could trade cryptocurrency for drugs, weapons, and other illegal goods and services, it brought the idea of the “dark web,” along with knowledge about Tor and bitcoin, into the consciousness of regular people. It seemed to truly herald a future where anonymous online transactions would make the world a more dangerous (or exciting, depending on your point of view) place.

There was just one hitch: it was less anonymous than it might’ve seemed. The Silk Road’s founder and admin, who went by the handle Dread Pirate Roberts, was eventually identified as a Texan named Ross Ulbricht and tracked down and arrested—not because his anonymizing technology failed, but because, it turns out, he voluntarily left evidence of his identity across the internet. In 2011, a user with the handle “altoid” posted on a bitcoin forum about a new hidden service that would be an “anonymous amazon.com,” linking to a site at silkroad420.wordpress.com. Months later, the same user posted looking to hire an “IT pro in the bitcoin community,” and urged candidates to write to rossulbricht@gmail.com. That Gmail address was in turn connected to a Google+ account that posted content about Austrian economic theory, a set of libertarian ideas that was also the subject of posts on Silk Road from the Dread Pirate Roberts.

If that wasn’t enough, in early 2012, a Stack Overflow user with the handle “Ross Ulbricht” posted a query looking for help connecting to a hidden Tor service using PHP—a programming technique that, it turned out, the Silk Road site eventually used. Ulbricht changed that username less than a minute after posting the query, but the original remained on Stack Overflow’s servers. Ulbricht was tracked down and arrested in late 2013, and is currently serving a life sentence in prison.
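Ulbricht’s question concerned PHP, but the mechanics are the same in any language: a client reaches a hidden service by handing the .onion name to Tor’s local SOCKS5 proxy (port 9050 by default) rather than resolving the address itself, which would leak the lookup to a local DNS resolver. Here’s a minimal, illustrative Python sketch of the SOCKS5 CONNECT request involved; the hostname is made up, and a real client would of course need Tor running to send it:

```python
import struct

TOR_SOCKS_HOST = "127.0.0.1"  # Tor's default local SOCKS proxy
TOR_SOCKS_PORT = 9050

def socks5_connect_request(host: str, port: int) -> bytes:
    """Build a SOCKS5 CONNECT request using the domain-name address
    type, so the .onion name is resolved inside the Tor network
    instead of by the operating system's DNS resolver."""
    name = host.encode("ascii")
    # VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (domain name)
    return (b"\x05\x01\x00\x03"
            + bytes([len(name)]) + name
            + struct.pack(">H", port))  # port, big-endian

# A hypothetical hidden-service address, for illustration only
req = socks5_connect_request("example0000000000.onion", 80)
```

After Tor acknowledges the request, the client simply speaks HTTP (or any other protocol) over the same socket, with Tor relaying the traffic to the hidden service.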

Marketplaces of bad ideas

With Ulbricht both a pioneer in the dark web marketplace business and a prime example of terrible opsec, you’d think subsequent dark web merchants would have taken the hint from his fate and cleaned up their own acts. But some seemed determined to repeat his mistakes.

For instance, in 2017, authorities in the U.S. and the Netherlands swooped in to shut down AlphaBay, another dark web drug market, and arrested Alexandre Cazes, its kingpin. Law enforcement officials noted that emails AlphaBay users received when they signed up or reset their password contained the email address Pimp_Alex_91@hotmail.com in their headers. (It’s not clear which part of that email should’ve been more embarrassing for a supposed criminal mastermind, “Pimp_Alex_91” or “hotmail.com.”) That email was connected to 2008 posts on an online tech forum from a user with the handle Alpha02 (also the username of the AlphaBay administrator; reused usernames are a common opsec failure) that included Cazes’s real name.
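Identifying addresses like this hide in headers most users never look at, but they’re trivial to pull out programmatically. A sketch using Python’s standard email module, with a hypothetical raw message modeled on those password-reset mails (the header names and values here are illustrative, not the actual evidence):

```python
from email import message_from_string
from email.utils import getaddresses

# Hypothetical raw message for illustration; note the Reply-To header
# the sender probably forgot was there.
raw = """\
From: AlphaBay Support <noreply@alphabay.example>
Reply-To: Pimp_Alex_91@hotmail.com
To: buyer@example.net
Subject: Password reset

Click the link to reset your password.
"""

msg = message_from_string(raw)

# Collect every address from identity-bearing headers, not just From:
identity_headers = ["From", "Reply-To", "Sender", "Return-Path"]
pairs = getaddresses([v for h in identity_headers
                      for v in msg.get_all(h, [])])
addresses = sorted({addr for _, addr in pairs if addr})
```

Running a scan like this over a mailbox of marketplace notifications would have surfaced the hotmail address immediately; investigators just have to look.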

Some of the individual vendors on AlphaBay were brought down by similar mistakes. For instance, Emil Babadjov sold fentanyl, heroin, and meth on the site with an account connected to the email address babadjov@gmail.com; this led the FBI to a Coinbase account and a Facebook profile in the cleverly backwards name of “Lime Vojdabab.” Jose Robert Porras, meanwhile, was much more circumspect with his identity information, but he made the mistake of posting a picture of his hand holding marijuana on his AlphaBay page. The photo quality was high enough that investigators were able to see his fingerprints and match them to prints they had on file.

Spies: Just like us?

Perhaps it isn’t a complete surprise that online drug dealers aren’t the most circumspect people in their conduct. But you’d think that state-sponsored hackers—presumably recruited for their skills in cybersecurity and all too familiar with the opsec failures of their victims—would be less likely to slip up when it comes to their own identities. However, a number of recent high-profile examples have shown that not to be the case.

Take, for instance, the Chinese military hacking group fearsome enough to be known to the U.S. as APT1 (the “APT” stands for “advanced persistent threat”). Despite its reputation, this group made some of the same mistakes we saw in our dark web examples—for instance, reusing usernames across sites. One APT1 member actually signed the source code he wrote for the group’s hacking tools with the nickname “Ugly Gorilla.” This handle in turn could be connected to posts on programming forums that were associated with his real name, Wang Dong. Some of those sites themselves suffered data breaches, with information about users being posted publicly on the dark web, which allowed U.S. researchers to connect Wang to a specific IP address that, it turns out, APT1 used as well.

In general, the group used predictable naming conventions for their usernames, code, and even passwords. They were consistent in another way that undermined their opsec: their working hours. Most timestamped activity associated with the group took place during business hours in Beijing. That not only pointed security researchers to their location, but also indicated that they were professionals rather than activists or enthusiasts hacking during their free time.
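This kind of inference is straightforward to automate: score every candidate UTC offset by how many of an actor’s timestamps fall inside local business hours on a weekday, and the best-scoring offsets hint at where (and how professionally) the operators work. A minimal sketch with synthetic timestamps standing in for the real compile times and login records:

```python
from collections import Counter
from datetime import datetime, timezone, timedelta

def likely_utc_offsets(timestamps, workday=(9, 18)):
    """Score each whole-hour UTC offset by how many timestamps land
    in local business hours (default 09:00-18:00) on a weekday."""
    scores = Counter()
    for offset_hours in range(-12, 15):
        tz = timezone(timedelta(hours=offset_hours))
        for ts in timestamps:
            local = ts.astimezone(tz)
            if local.weekday() < 5 and workday[0] <= local.hour < workday[1]:
                scores[offset_hours] += 1
    return scores.most_common()  # best-fitting offsets first

# Synthetic activity: one weekday work week, 01:00-09:00 UTC,
# which is 09:00-17:00 in Beijing (UTC+8)
samples = [datetime(2013, 2, 18 + d, 1 + h, tzinfo=timezone.utc)
           for d in range(5) for h in range(9)]
best_offset, hits = likely_utc_offsets(samples)[0]
```

With real data the signal is noisier, but a strong cluster around a single offset, confined to weekdays, is exactly the pattern researchers reported for APT1.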

Giving away the kingdom

We noted with APT1 that one technique for exploiting opsec holes is to track down the IP addresses of servers associated with a group you’re tracking. That can tell you a lot about your target, and if you’re lucky, you might be able to do a little counterhacking now that you have part of their infrastructure in your sights.

Or maybe you’ll get really lucky, and those servers will be open to the world.

That was the case in two recent counterespionage scenarios. One involved an Iranian-backed hacking group known as APT35, aka Charming Kitten. The group was storing gigabytes of data exfiltrated from U.S. and Greek military systems on a cloud server—but the security settings for that server were misconfigured, so when security researchers tracked it down, they were able to find all sorts of fascinating files. Perhaps none were more important for understanding APT35’s motivations and capabilities than a series of screen recordings showing members of the group engaging in hacking activities. These appear to be demonstration videos, possibly for training purposes to show new members the group’s techniques.

Meanwhile, another group—identified as a “state actor” by researchers, although the state in question was not named—was discovered by association with a command-and-control server for a novel piece of mobile malware. Again, good opsec would dictate that any such server be locked tight, and that it certainly not contain any data that might be identifying or traceable.

This group apparently didn’t feel such opsec hygiene was necessary. The command-and-control server hosted a treasure trove of data, including an extensive set of WhatsApp messages in which members of the group debated how best to use their government-supplied budget—whether they should build their own malware that could exfiltrate data from Android or iOS devices, or buy it from one of a number of underground vendors.

We know all this because the group ultimately chose to develop the malware in-house and, in a delightfully self-referential twist, tested it on one of their own phones, extracting the very WhatsApp messages in which they had discussed its development. It stands as a warning to IT pros everywhere: you can be savvy enough to write clever, effective code and still foolish enough to botch your opsec completely.

Maybe you feel like you don’t have anything to hide, but why take the risk of ending up in an article like this one?