Building a Multi-cloud Logging Strategy: Introduction

Posted under: Heavy Research

Logging and monitoring for cloud infrastructure has become the top question we are asked. Even general conversations about moving applications to the cloud always seem to end up with clients asking how to ‘do’ logging and monitoring of cloud infrastructure. Logs are key for security and compliance functions, and as you move into cloud services – where you do not actually control the infrastructure – logs become even more important for operations, risk, and security teams. These questions make sense to us: logging in and across cloud infrastructure is complicated, posing technical challenges as well as the potential for huge cost overruns if implemented poorly.

The road to multi-cloud logging is littered with the charred remains of many who have attempted to build it for their respective employers. Why? Because cloud services are very different – structurally and operationally – from on-premise systems. You do not necessarily have the same event sources, and the data itself is often different or incomplete, so existing reports and analytics may not work the same. Cloud services are ephemeral, so you can’t count on a server ‘being there’ when you go looking for it, and IP addresses are unreliable as identifiers. Networks may appear to behave the same, but they are software defined, so you cannot tap into them the way you would on-prem, nor make sense of the packets even if you could. How you detect and respond to attacks will be different, leveraging automation to be just as agile as your infrastructure. Some logs capture every API call; the granularity of that information is great, but the volume is significant. Finally, many companies lack staff who understand cloud, so they ‘lift and shift’ what they do today into their cloud service, and are then forced to refactor the deployment later.

One aspect that has surprised all of us here at Securosis is the adoption of multi-cloud. We do not mean some Software-as-a-Service (SaaS) alongside a single Infrastructure as a Service (IaaS) provider; rather, firms are choosing multiple IaaS vendors and deploying different applications into each. Sometimes this is a ‘best of breed’ approach, but far more often the selection of multiple vendors is driven by fear of getting ‘locked in’ with a single one. This makes logging and monitoring even more difficult, because collection capabilities, events, and integration points all vary across IaaS providers and on-premise systems.

Also complicating matters: existing Security Information and Event Management (SIEM) vendors, as well as some security analytics vendors, are behind the cloud adoption curve. Some because their cloud deployment models are no different from what they offer on-premise, making integration with cloud services awkward. Some because their solutions rely on traditional network approaches which don’t work with software defined networks. Still others employ pricing models which, when hooked into cloud log sources – which tend to be very verbose – end up costing the customer a small fortune. We will demonstrate some of the pricing models involved later in the paper.

Here are the common questions we are asked:

  • What data or logs do I need? Server/network/container/app/API/storage/etc
  • How do I get them turned on? How do I move them off the source?
  • How do I get data back to my SIEM? Can my existing SIEM handle these logs, both in terms of different schema and in terms of volume and rate?
  • Should I use log aggregators and send everything back to my analytics platform? At what point during my transition to cloud does this change?
  • How do I capture packets and where do I put them?
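One reason “can my existing SIEM handle these logs?” is such a common question is that each cloud provider emits audit events in its own schema. A minimal sketch of normalizing two such formats into one common shape before shipping to a SIEM is below. Field names follow the general structure of AWS CloudTrail records and GCP Cloud Audit Logs entries, but real records carry many more fields, and the sample records here are invented for illustration; treat this mapping as a sketch, not a complete implementation.

```python
def normalize_cloudtrail(event: dict) -> dict:
    """Map an AWS CloudTrail-style record to a common event shape."""
    return {
        "timestamp": event["eventTime"],
        "provider": "aws",
        "action": event["eventName"],  # the API call, e.g. "PutObject"
        "actor": event.get("userIdentity", {}).get("arn", "unknown"),
        "source_ip": event.get("sourceIPAddress"),
    }

def normalize_gcp_audit(entry: dict) -> dict:
    """Map a GCP Cloud Audit Logs-style entry to the same common shape."""
    payload = entry.get("protoPayload", {})
    return {
        "timestamp": entry["timestamp"],
        "provider": "gcp",
        "action": payload.get("methodName"),  # e.g. "storage.objects.create"
        "actor": payload.get("authenticationInfo", {}).get("principalEmail", "unknown"),
        "source_ip": payload.get("requestMetadata", {}).get("callerIp"),
    }

# Invented sample records, for illustration only
ct = {"eventTime": "2019-01-15T18:02:11Z", "eventName": "PutObject",
      "sourceIPAddress": "203.0.113.10",
      "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"}}
gcp = {"timestamp": "2019-01-15T18:05:42Z",
       "protoPayload": {"methodName": "storage.objects.create",
                        "authenticationInfo": {"principalEmail": "bob@example.com"},
                        "requestMetadata": {"callerIp": "198.51.100.7"}}}

for e in (normalize_cloudtrail(ct), normalize_gcp_audit(gcp)):
    print(e["provider"], e["action"], e["actor"])
```

The point is not this particular mapping – it is that some normalization layer has to exist somewhere, and deciding whether it lives in the cloud, in an aggregator, or in the SIEM itself is an architectural choice, not an afterthought.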

These questions, and many others, are telling: they come from the perspective of trying to fit cloud events into existing on-prem tools and processes. It’s not that they are wrong, but essentially they want to map new data into old and familiar systems. You need to rethink your approach to logging and monitoring.

The questions these firms should be asking:

  • What should my logging architecture look like now and how should it change?
  • How do I handle multiple accounts across multiple cloud providers?
  • What cloud native sources should I leverage?
  • How do I keep my costs manageable? Storage is incredibly cheap and plentiful in the cloud but what is the pricing model for various services that ingest and analyze the data I’m sending them?
  • What should I send to my existing data analytics tools? My SIEM?
  • How do I adjust what I monitor for cloud security?
  • Batch or real-time streams? Or both?
  • How do I adjust analytics for cloud?
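The cost question above deserves a back-of-the-envelope calculation early in planning. The sketch below compares “send everything to the SIEM” against “filter first, archive the rest in cheap object storage.” All prices and volumes here are hypothetical placeholders – substitute your vendor’s actual ingest pricing and your measured log volumes.

```python
def monthly_cost(daily_gb: float, siem_per_gb: float,
                 storage_per_gb_month: float, siem_fraction: float) -> dict:
    """Estimate monthly spend when only siem_fraction of log volume is sent
    to the SIEM and the remainder lands in object storage."""
    monthly_gb = daily_gb * 30
    siem_gb = monthly_gb * siem_fraction
    archive_gb = monthly_gb - siem_gb
    return {
        "siem": round(siem_gb * siem_per_gb, 2),
        "archive": round(archive_gb * storage_per_gb_month, 2),
    }

# Hypothetical numbers: 50 GB/day of API and flow logs, $5/GB SIEM ingest,
# $0.023/GB-month object storage.
everything = monthly_cost(50, 5.00, 0.023, siem_fraction=1.0)
filtered = monthly_cost(50, 5.00, 0.023, siem_fraction=0.2)
print("send everything:", everything)
print("filter to 20%: ", filtered)
```

Even with made-up numbers, the shape of the result is the lesson: per-GB SIEM ingest dominates the bill, while cloud object storage is nearly free by comparison, so the filtering decision drives the economics.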

In all, you need to take a fresh look at logging and monitoring, and adapt both IT and Security workflows to fit cloud services. This is especially true if you’re transitioning to cloud from an on-prem environment and will be running a hybrid environment during the transition – which may last several years from initial project kick-off.

Today we are launching a new series on Building a Multi-cloud Logging Strategy. Over the next few weeks, Gal Shpantzer and I (Adrian Lane) will dig into the following topics and discuss what we are seeing as we help firms migrate to the cloud. And there is a lot to cover.

Our tentative outline is as follows:

  1. Barriers to Success : In this post we will discuss some of the reasons why traditional approaches do not work, and potential areas where you will lack visibility.
  2. Cloud Logging Architectures : Here we discuss the anti-patterns and also the more productive approaches relating to logging. We will make recommendations on reference architectures to help with multi-cloud as well as centralized management.
  3. Native Logging Features : We discuss what sort of logs you can expect to receive from the various types of cloud services, what you may not receive in a shared responsibility service, and then the different data sources firms have come to expect and how to get them. We will also provide some practical notes on logging in GCP, Azure, AWS (major public IaaS providers) to help you navigate the native offerings, as well as some PaaS/SaaS vendors.
  4. BYO-Logging : Where and how to fill gaps by bringing third-party tools to the cloud, or building logging into the applications and services you deploy there.
  5. Cloud or On-premise Management? Here we discuss the tradeoffs between moving the log management function to the cloud, keeping these activities on-premise, and using a hybrid model. This includes the risks and benefits of connecting cloud workloads to on-prem networks.
  6. Security Analytics : As more and more firms augment or replace traditional SIEM with security analytics, data lakes and ML/AI, we will discuss some of these approaches and use of various data sources for threat detection, compliance, and governance. This is the end goal of 1-5 above: Facilitating the ability to collect, transform and store in order to gain real-time and historical insights.

As always, questions, comments, disagreements, and other constructive input are encouraged.

– Adrian Lane

*** This is a Security Bloggers Network syndicated blog from Securosis Blog authored by info@securosis.com (Securosis). Read the original post at: http://securosis.com/blog/building-a-multi-cloud-logging-strategy-introduction