There are many reasons why enterprises are using cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) to host web applications. Outsourcing to the cloud adds scalability, efficiency, and reliability, while also reducing workloads for IT teams.
These are positives, for sure – but while enterprise IT leaders celebrate the benefits that the cloud brings to their businesses, they may be missing a big negative. Organizations that move critical infrastructure to cloud platforms often mistakenly assume that their cloud providers also lock down security. In fact, this is often not the case: much of cloud security remains the customer's responsibility, and when enterprises let it slip – failing to configure critical controls or adopt the necessary secure architecture practices – they leave gaps that attackers can exploit.
For example, many of the risks that security teams have to deal with when they’re working to keep attackers from breaching on-premises architecture – like improper segmentation, overly permissive firewall rules, or weak passwords – also exist in the cloud. And there are always new risks that can affect cloud platform security, such as exposure of API keys in source repositories or open web directories. None of these risks should be left unmonitored – yet the default configurations for AWS, Azure, and GCP often don’t include turning on event logging, encryption, data retention, multifactor authentication, and other preventative controls.
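Exposed API keys in source repositories are one of the easier risks to check for. The sketch below scans text for credential-shaped strings using two illustrative regexes – the well-known AWS access key ID format and a generic `key = "secret"` assignment pattern. Real secret scanners (e.g. gitleaks or truffleHog) cover many more providers; the pattern set here is an assumption for demonstration only.

```python
import re

# Illustrative credential patterns, not an exhaustive set.
KEY_PATTERNS = {
    # AWS access key IDs start with "AKIA" followed by 16 uppercase
    # alphanumerics – a documented, stable format.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Generic "api_key = '...'" style assignments with a long value.
    "generic_secret": re.compile(
        r"(?i)(?:api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9/+=]{16,}['\"]"
    ),
}

def scan_text(text):
    """Return (pattern_name, matched_string) pairs found in text."""
    hits = []
    for name, pattern in KEY_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits
```

A scan like this can run as a pre-commit hook or a repository-wide sweep, so keys are caught before they ever reach a public repo.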
The steps below detail how to configure and monitor your cloud platforms for improved visibility, which cloud-native tools are needed to secure cloud platforms, and why integration can help secure a multi-cloud environment.
Step 1: Determine where sensitive data lives, and prioritize integrations that increase visibility
Because cloud deployments are relatively easy, it’s also easy to move data around from cloud to cloud. For this reason, security teams need to understand where data is stored and how it’s used. Without this knowledge, and without controls that manage visibility into sensitive data, it could be painfully easy to transfer customer data from a private server to a public storage repository.
Typically, the flow of data in the cloud should be traced from the point where the application is accessed back to the systems on which data is stored – and on to where a company’s developers eventually access those systems. Security teams need knowledge of how data moves through the environment – if not, they’ll waste time and money securing potentially lower-priority infrastructure devices.
Fortunately, the major cloud platforms have out-of-the-box security tools and open APIs that make log ingestion easy. Once security teams know what they’re looking for, they can use proper tuning and integrations to make alerting and visibility into attacks as simple as managing on-premises tools.
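One practical ingestion step is normalizing each platform's logs onto a common schema before they reach an alerting engine. The sketch below maps AWS CloudTrail, Azure Activity Log, and GCP Cloud Audit Log records onto a minimal shared shape; the source field names reflect those log formats, but exact shapes vary by export path and API version, and the target schema itself is an illustrative assumption.

```python
def normalize_event(provider, raw):
    """Map a provider-specific log record (dict) onto a minimal common
    schema: time, actor, action, source_ip. Field names are based on
    each platform's documented log format, but may vary by export path."""
    if provider == "aws":  # CloudTrail record
        return {"time": raw["eventTime"],
                "actor": raw["userIdentity"]["arn"],
                "action": raw["eventName"],
                "source_ip": raw.get("sourceIPAddress")}
    if provider == "azure":  # Activity Log entry
        return {"time": raw["eventTimestamp"],
                "actor": raw["caller"],
                "action": raw["operationName"],
                "source_ip": raw.get("callerIpAddress")}
    if provider == "gcp":  # Cloud Audit Log entry
        p = raw["protoPayload"]
        return {"time": raw["timestamp"],
                "actor": p["authenticationInfo"]["principalEmail"],
                "action": p["methodName"],
                "source_ip": p.get("requestMetadata", {}).get("callerIp")}
    raise ValueError(f"unknown provider: {provider}")
```

With events in one shape, a single set of alerting rules can cover all three platforms instead of three parallel rule sets.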
Step 2: Configure cloud platforms to maximize the security of their architecture
It’s worth spending time to figure out which features a specific cloud platform already provides for visibility and automation. Once these basic tools are enabled, security experts can start the process of fine-tuning and tightening controls. For example, teams can set up alerts for unusual calls from accounts, repeated denies, policy changes, and other actions and content that help pinpoint attacker activity. For more information on platform-specific features on AWS, Azure, and GCP, check out the Tactical Guide to Securing Data on Cloud Platforms.
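Alert rules like these are straightforward to express in code. The sketch below applies two of the detections mentioned above – repeated denies and policy changes – to a stream of CloudTrail-style event dicts. The event names and error codes are real CloudTrail values, but the subset chosen and the deny threshold are illustrative assumptions that would be tuned per environment.

```python
from collections import Counter

# A small illustrative subset of IAM-modifying CloudTrail eventNames.
POLICY_CHANGE_EVENTS = {"PutUserPolicy", "AttachRolePolicy", "DeleteRolePolicy"}
DENY_THRESHOLD = 5  # assumed tuning value; adjust per environment

def find_alerts(events):
    """Scan CloudTrail-style event dicts; return a list of alert strings."""
    alerts = []
    denies = Counter()
    for e in events:
        # Count denied calls per principal.
        if e.get("errorCode") in ("AccessDenied", "Client.UnauthorizedOperation"):
            denies[e["userIdentity"]["arn"]] += 1
        # Flag any change to IAM policy immediately.
        if e.get("eventName") in POLICY_CHANGE_EVENTS:
            alerts.append(f"policy change: {e['eventName']} by {e['userIdentity']['arn']}")
    for actor, count in denies.items():
        if count >= DENY_THRESHOLD:
            alerts.append(f"repeated denies ({count}) from {actor}")
    return alerts
```

In practice this logic would live in the platform's native tooling (e.g. CloudWatch alarms or SIEM correlation rules) rather than a standalone script, but the detection conditions are the same.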
At this stage, review the areas of vulnerability that can allow bad actors to pivot their attacks, including Identity and Access Management (IAM), cloud infrastructure, server infrastructure, and application security. Application security and IAM should be the primary areas of concern. IAM can be secured through the combination of access logs and regular auditing and tightening of permissions; application security requires security by design, via developers who care about the security of their applications.
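A simple starting point for IAM auditing is flagging overly permissive policy statements. The sketch below checks an AWS IAM policy document (standard JSON shape) for `Allow` statements that grant a wildcard action or resource – one common condition an audit would look for, not a complete audit.

```python
def overly_permissive(policy):
    """Return Allow statements whose Action or Resource is '*'.
    policy: an IAM policy document as a dict (standard AWS JSON shape)."""
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a lone statement may be a bare object
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        # Action/Resource may each be a string or a list of strings.
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(stmt)
    return findings
```

Running a check like this against exported policies on a schedule turns "regular auditing and tightening of permissions" into a repeatable, automated task.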
Step 3: Monitor the cloud through integration
Most cloud environments generate a large number of logs that can quickly take up all of a security team’s time. To constantly parse, tune, and respond to alerts, teams need automation and integration. To efficiently and effectively respond to alerts, teams need a centralized view of their data across cloud and on-premises environments, such as through an Open XDR solution. And, to see what’s happening in the cloud environment and export logs into an alerting engine such as a SIEM, enterprise organizations need to plan a cloud-logging strategy by answering these questions:
- Where will log aggregation and filtering tools reside? It’s efficient to have collection tools doing the filtering and aggregating as close to the source as possible.
- How big are the Internet connections between your cloud environment and local data centers? It may be more cost-effective to keep raw data at both places, and send actionable events used in alerting rules to the central SIEM or monitoring tool.
- How will you collect and parse cloud infrastructure logs? In addition to standard operating system or application logs from servers, many essential cloud infrastructure logs should be gathered and monitored for unauthorized or malicious activities.
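Filtering close to the source, as the first question suggests, can be sketched simply: forward only actionable events to the central SIEM while keeping per-event-type counts with the raw data locally. The function below assumes CloudTrail-style dicts with an `eventName` field; the split between "actionable" and "summary-only" is a tuning decision each team makes for itself.

```python
def filter_for_siem(events, actionable):
    """Split a raw event stream into (forward, counts).
    forward: events whose eventName is in the actionable set, to be
             sent on to the central SIEM or monitoring tool;
    counts:  per-eventName tallies retained locally with the raw data."""
    forward, counts = [], {}
    for e in events:
        name = e.get("eventName", "unknown")
        counts[name] = counts.get(name, 0) + 1
        if name in actionable:
            forward.append(e)
    return forward, counts
```

Shipping only the `forward` subset keeps bandwidth and SIEM ingestion costs down, while the local counts preserve enough context to investigate later.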
For detailed instructions on securing data on Google Cloud, Azure, and AWS following the steps above, check out The Tactical Guide to Securing Data on Cloud Platforms in 2021.
Joe Partlow, ReliaQuest CTO, currently oversees all new research and development efforts and new product initiatives. He has been involved with Infosec in some capacity or role for over 20 years, mostly on the defensive side but always impressed by offensive tactics. Current projects and interests include data analytics at scale, forensics, threat, security metrics and automation, red/purple teaming, and artificial intelligence. Outside of Information Security, he has been involved in many other areas of the business including Web Development, Business Intelligence, Database Administration, Project Management, IT, and Operations. He has experience in many different business verticals including retail, healthcare, financial, state/local government, and the Department of Defense. He is also a regular speaker and contributor at security conferences, groups, and associations.