Monero Emerges As Crypto of Choice For Cybercriminals

An anonymous reader quotes a report from Ars Technica: While bitcoin leaves a visible trail of transactions on its underlying blockchain, the niche “privacy coin” monero was designed to obscure the sender and receiver, as well as the amount exchanged. As a result, it has become an increasingly sought-after tool for criminals such as ransomware gangs, posing new problems for law enforcement. “We’ve seen ransomware groups specifically shifting to monero,” said Bryce Webster-Jacobsen, director of intelligence at GroupSense, a cyber security group that has helped a growing number of victims pay out ransoms in monero. “[Cyber criminals] have recognized the ability for mistakes to be made using bitcoin that allow blockchain transactions to reveal their identity.”

Russia-linked REvil, the notorious ransomware group believed to be behind the attack this month on meatpacker JBS, has removed the option of paying in bitcoin this year, demanding monero only, according to Brett Callow, threat analyst at Emsisoft. Meanwhile, both DarkSide, the group blamed for the Colonial Pipeline hack, and Babuk, which was behind the attack on Washington DC police this year, allow payments in either cryptocurrency but charge a 10 to 20 percent premium to victims paying in riskier bitcoin, experts say. Justin Ehrenhofer, a cryptocurrency compliance expert and member of the monero developer community, said that at the beginning of 2020, its use by ransomware gangs was “a rounding error.” Today he estimates that about 10 to 20 percent of ransoms are paid in monero and that the figure will probably rise to 50 percent by the end of the year.

How to Interpret the Various Sections of the Cybersecurity Executive Order

The Biden administration released a new executive order for cybersecurity on May 12, 2021. Although many know the overarching message of the executive order, it’s also important to know the specific details outlined in each section. As our CEO Sam King remarked, “It gets really specific about the types of security controls they want organizations to adhere to and government agencies to take into account when they’re looking to do business with software vendors in particular.”
As we go through each section, we will intersperse thoughts from Sam King and Chris Wysopal, co-founder and CTO at Veracode, as well as thoughts and statements from Forrester analysts Allie Mellen, Jeff Pollard, Steve Turner, and Sandy Carielli, from their recently aired webinar, A Deep Dive Into The Executive Order On Cybersecurity.
Section 1
The first section talks about the overarching policy in the executive order, stating:
“The United States faces persistent and increasingly sophisticated malicious cyber campaigns that threaten the public sector, the private sector, and ultimately the American people’s security and privacy.  The Federal Government must improve its efforts to identify, deter, protect against, detect, and respond to these actions and actors.”
It sets the framework for the order, calling “prevention, detection, assessment, and remediation of cyber incidents” a top priority. And if the Federal Government takes ownership of national cybersecurity, it will not only improve security in the public sector but should also drive stronger security requirements in the private sector.
Section 2
Section 2 removes the barriers to sharing threat information. In other words, IT service providers can no longer withhold information pertaining to breaches, even when contractual obligations would previously have prevented disclosure, and they will have to disclose this information in a timely manner. As Turner expresses in the Forrester webinar, “this section really opens up the door for all of the further technology improvements and the way that we want to improve security holistically as we go down toward significantly modernizing the way that the federal government does cybersecurity.”
Section 3
Speaking of modernizing the way that the federal government handles cybersecurity, section 3 is specifically aimed at addressing today’s sophisticated cyber threat environment. It sets the groundwork for moving the Federal Government to secure cloud services and a zero-trust architecture. As part of the zero-trust policy, vendors providing IT services to the government will have to deploy multifactor authentication and encryption in a specified time period.
Section 4
Section 4 enhances software supply chain security. It sets a new precedent for the development of software sold to the government. Developers will be expected to have increased oversight of their software and they will be required to make security data public. Wysopal found “the scope of the software supply chain requirements to be the most notable aspect” of the new executive order, stating, “It’s very comprehensive – all the different aspects of delivering secure software that hasn’t been tampered with by attackers, that has had software assurance practices built into the development pipeline, and notification to the federal government if a vendor has been compromised – because there’s a likelihood that the software was the target.”
This section also proposes that software be ranked or labeled based on its security. As Carielli explains in the Forrester webinar, the software will be labeled with a ranking – like Energy Star or the Good Housekeeping seal – attesting to a vendor’s security standing. Wysopal is a strong proponent of the labeling program, comparing it to programs used in the UK and Singapore on IoT devices. He sees it as a good way to incentivize vendors to secure their products. King agrees, calling the pilot program a great way to increase transparency and accountability.
Sections 5 and 6
Despite all of these new steps in place to prevent cyber incidents, it’s still possible for a breach to occur. That’s where section 5 comes into play. Section 5 establishes a review board – similar to the National Transportation Safety Board – to analyze cyber incidents and propose steps for future avoidance, which Wysopal praises as a welcome addition. There will also be a standard playbook – outlined in section 6 – that will provide response tips for cyberattacks.
Sections 7 and 8
Section 7 “improves the ability to detect malicious cyber activity on federal networks by enabling a government-wide endpoint detection and response system and improved information sharing within the Federal government.” And section 8 improves investigation and remediation by requiring federal agencies to maintain a cybersecurity event log.
Sections 9 and 10
The final sections call for the adoption of the National Security Systems requirements laid out in the Executive Order and provide any outstanding definitions or provisions.
Although the Forrester analysts outlined some potential issues with the executive order during their webinar, like the extra budget and resources that will be needed to fund the cybersecurity requirements, they also noted the potential for the executive order to have a positive effect on the private sector. Pollard estimates that the private sector will likely follow suit in requiring IT vendors to release breach data and follow a zero-trust architecture. He also predicts the private sector will require increased security in the software development lifecycle.
Wysopal recently stated in his blog New Cybersecurity Executive Order: What You Need to Know, “The US government won’t be the last entity demanding more security transparency from software vendors. It’s a sign of what’s to come for any organization creating software in any industry.”
What do you think? Will the requirements of the executive order trickle down to the private sector?
Keep an eye out for our upcoming blog where Chris Wysopal, co-founder and CTO of Veracode, will give his opinions on how the executive order will impact the consumer market.
In the meantime, visit the Veracode Executive Order page for additional insight on Biden’s executive order.

Complete and continuous remote worker visibility with Network Visibility Module data as a primary telemetry source

Navigating the new normal

Organizations are currently facing new challenges related to monitoring and securing their remote workforces. Many users don’t always use their VPNs while working remotely, which creates gaps in visibility that increase organizational risk. In the past, many organizations viewed these occasional gaps as negligible risks because the overall volume of non-VPN-connected remote work was low. Today that is no longer the case: organizations and workers have been thrust into a new “work from home (WFH) era.” The shift led to an explosion in the need for remote access from anywhere and on anything, effectively expanding threat surfaces and increasing opportunities for attackers. At the same time, organizations were hit with a wide-ranging and prolonged blackout in employee activity visibility, leaving security teams scrambling to adapt and further exacerbating overall organizational security risk.

Automotive Software Safety and Security Still Needs Improvement

A recent blog post, “Automotive software defects,” from Phil Koopman, Carnegie Mellon professor and author of “Better Embedded System Software,” discusses the increasing number of software defects in automotive software that pose significant safety hazards. The post points out a rise in reports of potentially life-threatening software defects, yet a general resistance in the industry to dealing with the quality, safety, and security of the software.

Addressing the cybersecurity skills gap through neurodiversity

Cat Contillo is a proud queer autistic and a Threat Analyst II at Huntress. She’s passionate about LGBTQ+ rights, autism, neurodiversity, DEI and cybersecurity.

Addressing the skills gap and strengthening your own security team means bringing in different minds and perspectives — and that starts with embracing neurodiversity. To even have a chance at closing the cybersecurity skills gap, we need people with a variety of different abilities and thought processes. But did you know that there’s an untapped potential in individuals who are neurodivergent?

CloudHSM best practices to maximize performance and avoid common configuration pitfalls

AWS CloudHSM provides fully-managed hardware security modules (HSMs) in the AWS Cloud. CloudHSM automates day-to-day HSM management tasks including backups, high availability, provisioning, and maintenance. You’re still responsible for all user management and application integration.

In this post, you will learn best practices to help you maximize the performance of your workload and avoid common configuration pitfalls in two areas: administration and configuration of CloudHSM.

Administration of CloudHSM

The administration of CloudHSM includes those tasks necessary to correctly set up your CloudHSM cluster, and to manage your users and keys in a secure and efficient manner.

Initialize your cluster with a customer key pair

To initialize a new CloudHSM cluster, you will first create a new RSA key pair, which we will call the customer key pair. First, generate a self-signed certificate using the customer key pair. Then, you sign the cluster’s certificate by using the customer private key, as described in the Initialize the Cluster section of the AWS CloudHSM User Guide. The resulting signed cluster certificate, as shown in Figure 1, identifies your CloudHSM cluster as yours.
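The AWS documentation walks through this step with openssl commands. As a rough Python sketch of the same idea – generating the customer key pair and a self-signed certificate – here is one way to do it with the third-party cryptography package (the common name and validity period are illustrative, not prescribed by AWS):

```python
# Sketch: generate the customer RSA key pair and a self-signed certificate.
# In production, the customer private key should be generated and kept in an
# offline HSM, not on a workstation.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_customer_ca(common_name="CloudHSM Customer CA"):
    # 2048-bit RSA customer key pair
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
        .sign(key, hashes.SHA256())
    )
    return key, cert
```

The resulting certificate is what you would then use to sign the cluster’s certificate signing request during initialization.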

Figure 1: CloudHSM key hierarchy and customer generated keys

It’s important to use best practices when you generate and store the customer private key. The private key is a binding secret between you and your cluster, and cannot be rotated. We therefore recommend that you create the customer private key in an offline HSM and store the HSM securely. Any entity (organization, person, system) that demonstrates possession of the customer private key will be considered an owner of the cluster and the data it contains. In this procedure, you are using the customer private key to claim a new cluster, but in the future you could also use it to demonstrate ownership of the cluster in scenarios such as cloning and migration.

Manage your keys with crypto user (CU) accounts

The HSMs provided by CloudHSM support different types of HSM users, each with specific entitlements. Crypto users (CUs) generate, manage, and use keys. If you’ve worked with HSMs in the past, you can think of CUs as similar to partitions. However, CU accounts are more flexible. The CU that creates a key owns the key, and can share it with other CUs. The shared key can be used for operations in accordance with the key’s attributes, but the CU that the key was shared with cannot manage it – that is, they cannot delete, wrap, or re-share the key.

From a security standpoint, it is a best practice for you to have multiple CUs with different scopes. For example, you can have different CUs for different classes of keys. As another example, you can have one CU account to create keys, and then share these keys with one or more CU accounts that your application leverages to utilize keys. You can also have multiple shared CU accounts, to simplify rotation of credentials in production applications.

Warning: You should be careful when deleting CU accounts. If the owner CU account for a key is deleted, the key can no longer be used. You can use the cloudhsm_mgmt_util tool command findAllKeys to identify which keys are owned by a specified CU. You should rotate these keys before deleting a CU. As part of your key generation and rotation scheme, consider using labels to identify current and legacy keys.

Manage your cluster by using crypto officer (CO) accounts

Crypto officers (COs) can perform user management operations including change password, create user, and delete user. COs can also set and modify cluster policies.

Important: When you add or remove a user, or change a password, it’s important to ensure that you connect to all the HSMs in a cluster, to keep them synchronized and avoid inconsistencies that can result in errors. It is a best practice to use the Configure tool with the -m option to refresh the cluster configuration file before making mutating changes to the cluster. This helps to ensure that all active HSMs in the cluster are properly updated, and prevents the cluster from becoming desynchronized. You can learn more about safe management of your cluster in the blog post Understanding AWS CloudHSM Cluster Synchronization. You can verify that all HSMs in the cluster have been added by checking the /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg file.

After a password has been set up or updated, we strongly recommend that you keep a record in a secure location. This will help you avoid lockouts due to erroneous passwords, because clients will fail to log in to HSM instances that do not have consistent credentials. Depending on your security policy, you can use AWS Secrets Manager, specifying a customer master key created in AWS Key Management Service (KMS), to encrypt and distribute your secrets – secrets in this case being the CU credentials used by your CloudHSM clients.
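As a minimal sketch of that pattern with boto3, the helper below builds the secret payload and stores it in Secrets Manager encrypted under a customer managed KMS key. The secret name and key alias are illustrative, not AWS conventions:

```python
import json

def build_cu_secret(username, password):
    """Serialize CU credentials as the JSON payload for the secret."""
    return json.dumps({"username": username, "password": password})

def store_cu_credentials(username, password,
                         secret_name="cloudhsm/cu-credentials",
                         kms_key_alias="alias/cloudhsm-secrets",
                         region="us-east-1"):
    # boto3 is imported lazily so build_cu_secret() stays usable without AWS deps
    import boto3
    sm = boto3.client("secretsmanager", region_name=region)
    return sm.create_secret(
        Name=secret_name,
        KmsKeyId=kms_key_alias,  # customer managed key instead of the default
        SecretString=build_cu_secret(username, password),
    )
```

Your CloudHSM client hosts would then fetch the secret with get_secret_value at startup, rather than keeping credentials on disk.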

Use quorum authentication

To prevent a single CO from modifying critical cluster settings, a best practice is to use quorum authentication. Quorum authentication is a mechanism that requires any operation to be authorized by a minimum number (M) of a group of N users and is therefore also known as M of N access control.

To prevent lock-outs, it’s important that you have at least two more COs than the M value you define for the quorum minimum value. This ensures that if one CO gets locked out, the others can safely reset their password. Also be careful when deleting users, because if you fall under the threshold of M, you will be unable to create new users or authorize any other operations and will lose the ability to administer your cluster.
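The arithmetic behind those two rules of thumb is simple enough to encode as sanity checks before you change quorum settings. This is an illustration of the guidance above, not an AWS tool:

```python
def quorum_is_safe(num_cos, quorum_m):
    """True if the cluster keeps at least two spare COs beyond the quorum value M,
    so a locked-out CO can still have their password reset."""
    return num_cos >= quorum_m + 2

def can_authorize(active_cos, quorum_m):
    """True if enough unlocked COs remain to approve a quorum-controlled operation."""
    return active_cos >= quorum_m
```

For example, with M = 3 you would want at least five COs; dropping to four leaves no safety margin if one account locks out.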

If you do fall below the minimum quorum required (M), or if all of your COs end up in a locked-out state, you can revert to a previously known good state by restoring from a backup to a new cluster. CloudHSM automatically creates at least one backup every 24 hours. Backups are also event-driven: adding or removing HSMs will trigger additional backups.

Configuration

CloudHSM is a fully managed service, but it is deployed within the context of an Amazon Virtual Private Cloud (Amazon VPC). This means there are aspects of the CloudHSM service configuration that are under your control, and your choices can positively impact the resilience of your solutions built using CloudHSM. The following sections describe the best practices that can make a difference when things don’t go as expected.

Use multiple HSMs and Availability Zones to optimize resilience

When you’re optimizing a cluster for high availability, one of the aspects you have control of is the number of HSMs in the cluster and the Availability Zones (AZs) where the HSMs get deployed. An AZ consists of one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region; AZs may span multiple physical buildings and have distinct risk profiles. Most AWS Regions have three Availability Zones, and some have as many as six.

AWS recommends placing at least two HSMs in the cluster, deployed in different AZs, to optimize data loss resilience and improve the uptime in case an individual HSM fails. As your workloads grow, you may want to add extra capacity. In that case, it is a best practice to spread your new HSMs across different AZs to keep improving your resistance to failure. Figure 2 shows an example CloudHSM architecture using multiple AZs.

Figure 2: CloudHSM architecture using multiple AZs

When you create a cluster in a Region, it’s a best practice to include subnets from every available AZ of that Region. This is important, because after the cluster is created, you cannot add additional subnets to it. In some Regions, such as Northern Virginia (us-east-1), CloudHSM is not yet available in all AZs at the time of writing. However, you should still include subnets from every AZ, even if CloudHSM is currently not available in that AZ, to allow your cluster to use those additional AZs if they become available.
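A hedged boto3 sketch of that advice: enumerate the VPC’s subnets, keep one per AZ, and pass all of them to create_cluster. Which subnet is picked per AZ and the HSM type are illustrative choices:

```python
def one_subnet_per_az(subnets):
    """Pick one subnet ID per Availability Zone from describe_subnets output."""
    per_az = {}
    for s in subnets:
        per_az.setdefault(s["AvailabilityZone"], s["SubnetId"])  # first seen wins
    return sorted(per_az.values())

def create_cluster_all_azs(vpc_id, region):
    # boto3 is imported lazily so one_subnet_per_az() stays usable without AWS deps
    import boto3
    ec2 = boto3.client("ec2", region_name=region)
    subnets = ec2.describe_subnets(
        Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
    )["Subnets"]
    hsm = boto3.client("cloudhsmv2", region_name=region)
    return hsm.create_cluster(
        HsmType="hsm1.medium",
        SubnetIds=one_subnet_per_az(subnets),
    )
```

Because subnets cannot be added to a cluster after creation, passing every AZ’s subnet up front is what keeps the door open for future expansion.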

Increase your resiliency with cross-Region backups

If your threat model involves a failure of the Region itself, there are steps you can take to prepare. First, periodically create copies of the cluster backup in the target Region. See the blog post How to clone an AWS CloudHSM cluster across regions for an extensive description of how to create copies and deploy a clone of an active CloudHSM cluster.
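As a minimal sketch of the periodic copy step, the helper below finds the most recent READY backup for a cluster and copies it to another Region via the CloudHSMv2 copy_backup_to_region API (the selection logic is an assumption; adapt it to your retention scheme):

```python
def newest_ready_backup(backups):
    """Return the most recent backup in READY state, or None if there is none."""
    ready = [b for b in backups if b.get("BackupState") == "READY"]
    if not ready:
        return None
    return max(ready, key=lambda b: b["CreateTimestamp"])

def copy_latest_backup(cluster_id, source_region, dest_region):
    # boto3 is imported lazily so newest_ready_backup() stays usable without AWS deps
    import boto3
    src = boto3.client("cloudhsmv2", region_name=source_region)
    backups = src.describe_backups(
        Filters={"clusterIds": [cluster_id]}
    )["Backups"]
    backup = newest_ready_backup(backups)
    if backup is None:
        raise RuntimeError("no READY backup found for cluster %s" % cluster_id)
    return src.copy_backup_to_region(
        DestinationRegion=dest_region, BackupId=backup["BackupId"]
    )
```

Running this on a schedule (for example from EventBridge) would keep a recent restorable copy in your target Region.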

As part of your change management process, you should keep copies of important files, such as the files stored in /opt/cloudhsm/etc/. If you customize the certificates that you use to establish communication with your HSM, you should back up those certificates as well. Additionally, you can use configuration scripts with the AWS Systems Manager Run Command to set up two or more client instances that use exactly the same configuration in different Regions.

The managed backup retention feature in CloudHSM automatically deletes out-of-date backups for an active cluster. However, because backups that you copy across Regions are not associated with an active cluster, they are not in scope of managed backup retention and you must delete out-of-date backups yourself. Backups are secure and contain all users, policies, passwords, certificates and keys for your HSM, so it’s important to delete older backups when you rotate passwords, delete a user, or retire keys. This ensures that you cannot accidentally bring older data back to life by creating a new cluster that uses outdated backups.

The following script shows you how to delete all backups older than a certain point in time. You can also download the script from S3.

#!/usr/bin/env python
#
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
#
# Reference Links:
# https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
# https://docs.python.org/3/library/re.html
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/cloudhsmv2.html#CloudHSMV2.Client.describe_backups
# https://docs.python.org/3/library/datetime.html#datetime-objects
# https://pypi.org/project/typedate/
# https://pypi.org/project/pytz/
#
import boto3, time, datetime, re, argparse, typedate, json

def main():
    bkparser = argparse.ArgumentParser(
        prog='backdel',
        usage='%(prog)s [-h] --region --clusterID [--timestamp] [--timezone] [--deleteall] [--dryrun]',
        description='Deletes CloudHSMv2 backups from a given point in time\n')
    bkparser.add_argument('--region', metavar='-r', dest='region', type=str,
                          help='region where the backups are stored',
                          required=True)
    bkparser.add_argument('--clusterID', metavar='-c', dest='clusterID', type=str,
                          help='CloudHSMv2 cluster_id for which you want to delete backups',
                          required=True)
    bkparser.add_argument('--timestamp', metavar='-t', dest='timestamp', type=str,
                          help="Enter the timestamp to filter the backups that should be deleted:\n"
                               "  Backups older than the timestamp will be deleted.\n"
                               "  Timestamp ('MM/DD/YY', 'MM/DD/YYYY' or 'MM/DD/YYYY HH:mm')",
                          required=False)
    bkparser.add_argument('--timezone', metavar='-tz', dest='timezone', type=typedate.TypeZone(),
                          help="Enter the timezone to adjust the timestamp.\n"
                               "  Example arguments:\n"
                               "  --timezone '-0200' , --timezone '05:00' , --timezone GMT #If the pytz module has been installed",
                          required=False)
    bkparser.add_argument('--dryrun', dest='dryrun', action='store_true',
                          help="Set this flag to simulate the deletion",
                          required=False)
    bkparser.add_argument('--deleteall', dest='deleteall', action='store_true',
                          help="Set this flag to delete all the back ups for the specified cluster",
                          required=False)
    args = bkparser.parse_args()
    client = boto3.client('cloudhsmv2', args.region)
    cluster_id = args.clusterID
    timestamp_str = args.timestamp
    timezone = args.timezone
    dry_true = args.dryrun
    delall_true = args.deleteall
    delete_all_backups_before(client, cluster_id, timestamp_str, timezone, dry_true, delall_true)

def delete_all_backups_before(client, cluster_id, timestamp_str, timezone, dry_true, delall_true, max_results=25):
    timestamp_datetime = None
    if delall_true == True and not timestamp_str:
        print("\nAll backups will be deleted...\n")
    elif delall_true == True and timestamp_str:
        print("\nUse of incompatible instructions: --timestamp and --deleteall cannot be used in the same invocation\n")
        return
    elif not timestamp_str:
        print("\nParameter missing: --timestamp must be defined\n")
        return
    else:
        # Valid formats: 'MM/DD/YY', 'MM/DD/YYYY' or 'MM/DD/YYYY HH:mm'
        if re.match(r'^\d\d/\d\d/\d\d\d\d \d\d:\d\d$', timestamp_str):
            try:
                timestamp_datetime = datetime.datetime.strptime(timestamp_str, "%m/%d/%Y %H:%M")
            except Exception as e:
                print("Exception: %s" % str(e))
                return
        elif re.match(r'^\d\d/\d\d/\d\d\d\d$', timestamp_str):
            try:
                timestamp_datetime = datetime.datetime.strptime(timestamp_str, "%m/%d/%Y")
            except Exception as e:
                print("Exception: %s" % str(e))
                return
        elif re.match(r'^\d\d/\d\d/\d\d$', timestamp_str):
            try:
                timestamp_datetime = datetime.datetime.strptime(timestamp_str, "%m/%d/%y")
            except Exception as e:
                print("Exception: %s" % str(e))
                return
        else:
            print("The format of the specified timestamp is not supported by this script. Aborting...")
            return
        print("Backups older than %s will be deleted...\n" % timestamp_str)

    try:
        response = client.describe_backups(MaxResults=max_results, Filters={"clusterIds": [cluster_id]}, SortAscending=True)
    except Exception as e:
        print("DescribeBackups failed due to exception: %s" % str(e))
        return

    failed_deletions = []
    while True:
        if 'Backups' in response.keys() and len(response['Backups']) > 0:
            for backup in response['Backups']:
                if timestamp_str and not delall_true:
                    if timezone != None:
                        timestamp_datetime = timestamp_datetime.replace(tzinfo=timezone)
                    else:
                        timestamp_datetime = timestamp_datetime.replace(tzinfo=backup['CreateTimestamp'].tzinfo)
                    if backup['CreateTimestamp'] > timestamp_datetime:
                        break
                print("Deleting backup %s whose creation timestamp is %s:" % (backup['BackupId'], backup['CreateTimestamp']))
                try:
                    if not dry_true:
                        delete_backup_response = client.delete_backup(BackupId=backup['BackupId'])
                except Exception as e:
                    print("DeleteBackup failed due to exception: %s" % str(e))
                    failed_deletions.append(backup['BackupId'])
                print("Sleeping for 1 second to avoid throttling. \n")
                time.sleep(1)
        if 'NextToken' in response.keys():
            try:
                response = client.describe_backups(MaxResults=max_results, Filters={"clusterIds": [cluster_id]}, SortAscending=True, NextToken=response['NextToken'])
            except Exception as e:
                print("DescribeBackups failed due to exception: %s" % str(e))
        else:
            break

    if len(failed_deletions) > 0:
        print("FAILED backup deletions: " + str(failed_deletions))

if __name__ == "__main__":
    main()

What is Application Security Risk?

If you have ever considered how hackers and other cyber attackers on the internet use different paths to harm systems and software, you already know a bit about what application risk means. While understanding the essence of risk—and what it can do to the business—is critical, it’s also important to visualize how the notion of security risk is impacted and affected by other areas of threat and vulnerability. Much like a mathematical equation, the relationship between threat, vulnerability and risk sits at the core of application development and security.
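One common way to make that “mathematical equation” concrete – a widely used heuristic, not an official standard – is to treat risk as the product of threat, vulnerability, and impact, so that removing any one factor drives the risk to zero:

```python
def risk_score(threat, vulnerability, impact):
    # Risk = Threat x Vulnerability x Impact (a common heuristic, with each
    # factor rated on some scale, e.g. 0-5). If any factor is zero -- no
    # threat actor, no exploitable weakness, or no business consequence --
    # the resulting risk is zero.
    return threat * vulnerability * impact
```

For example, a severe vulnerability in an application no attacker can reach, or one whose compromise carries no business impact, scores far lower than a moderate flaw in an internet-facing payment system.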

Analyzing SonicWall’s Unsuccessful Fix for CVE-2020-5135

Back in September 2020, I configured a SonicWall network security appliance to act as a VPN gateway between physical devices in my home lab and cloud resources on my Azure account. As I usually do with new devices on my network, I did some cursory security analysis of the product and it didn’t take long before I had identified what looked like a buffer overflow in response to an unauthenticated HTTP request. I quickly reported the issue to SonicWall’s PSIRT on September 18 and received a same day response that my report was a duplicate of another report they had received. When the advisory was ultimately published, I learned that the other report was one out of 11 from Nikita Abramov with Positive Technologies. In this post, I will discuss some aspects of the vulnerabilities I found, my interactions with SonicWall PSIRT, and some general thoughts about vulnerability handling and disclosure.

10 ways founders can manage their mental health while fundraising

Adonica Shaw is founder of Wingwomen, a health-and-wellness-focused social media platform for professional women.

Entrepreneurs’ mental health and stress management started to be more widely discussed amid the pandemic, but for many seasoned entrepreneurs, the topic is still taboo. Now that the world seems to be inching toward a new normal, some founders, investors and mental health experts find themselves asking whether we need to consider mental health moving forward if it wasn’t an issue before the pandemic.

Strategies, tools, and frameworks for building an effective threat intelligence team

How to think about building a threat intelligence program

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Red Canary Director of Intelligence Katie Nickels, a certified instructor with the SANS Institute. In this blog, Katie shares strategies, tools, and frameworks for building an effective threat intelligence team.