There’s no question that we’re at the dawn of an exciting time for the application of machine learning to IT security. That said, unless end users can focus these capabilities on concrete use cases, the overall impact of this revolution may be compromised.
Whether that caveat is more a matter of perception than a real knock on the technology’s underlying value proposition is open to debate. However, with machine learning – and, of course, AI – currently infecting the marketing language of so many technology pundits and providers, we’re already deep in the hype cycle.
The truth, at least as I see it, is that machine learning will ultimately help practitioners either way; but we also have an important opportunity to keep the horse from getting too far out of the barn in terms of inflated or misplaced expectations.
Separating the machine learning ‘wheat from the chaff’
A recent conversation with an industry analyst about research he’s undertaken around artificial intelligence and machine learning drove this point home in spades. Experts such as this, to whom practitioners turn to separate the wheat from the chaff [polite version], are already getting lots of calls from security leaders asking which messages they should take seriously and which they should take a pass on. Loosely, the question seems to be: “Where do I really need this stuff and how will it help?” Even worse, some have bought into the hype and are looking for the AI that will allow them to offload some human decision making. They are looking for “AI for the SOC.”
For his part, the expert noted that even he is becoming challenged to understand which providers actually deliver fundamentally applicable AI or machine learning technology, compared to those who might be slapping a fresh coat of paint on existing capabilities just to get some skin in the game.
This led us to a state of wild agreement over the notion that common sense must prevail, and that everything valuable about machine learning in the near term must be tied directly to hard and fast workflows. And such use cases clearly do already exist, as noted in previous CSO articles on the topic appearing long before this conversation ever took shape.
Case in point: machine learning and user behavior analysis
For example, throwing machine learning capabilities at the issue of, say, user behavior analysis [disclaimer: this is not coincidentally what my company is focused on] can’t just be about adding another layer of discrete technology to a broad existing practice such as “preventing insider threats.” There’s certainly broad value to be had there, but the use case should be more targeted. It is not just about the technology – use cases and outcomes are key.
This can be accomplished by narrowing the aperture to a particular area of workflow, such as integrating user analytics, backed by unsupervised and supervised machine learning, into the prioritization of DLP incidents, endpoint incidents, CASB alerts and the like. By creating a tangible process that optimizes and amplifies an existing value stream – such as prioritizing which existing incidents actually represent risks versus acceptable behaviors – machine learning isn’t serving as some unseen, esoteric superset of capabilities; it’s being applied in ways that impact existing processes.
Leveraging unsupervised machine learning in this fashion allows for ingestion of massive data sets of existing security incident information to rapidly establish what normal and abnormal behaviors look like. This critical context supports current and historical comparative analysis and the creation of baselines to inform future incident handling, and even policy tuning. It can also help focus the analyst’s attention on high-impact items.
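To make the baselining idea concrete, here is a deliberately simple, hypothetical sketch – a statistical stand-in for the unsupervised techniques a real product would use. It builds a per-user baseline from historical daily incident counts and flags users whose latest volume deviates sharply from their own norm. All names and data are illustrative, not drawn from any actual system.

```python
from statistics import mean, stdev

def baseline_anomalies(event_counts, threshold=2.0):
    """Flag users whose latest daily incident count deviates sharply
    from their own historical baseline.

    event_counts: dict mapping user -> list of daily incident counts,
    with the most recent day last.  Returns users whose latest count
    sits more than `threshold` standard deviations above their mean.
    """
    flagged = []
    for user, counts in event_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough history to form a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1e-9  # flat history: any rise is a large deviation
        if (latest - mu) / sigma > threshold:
            flagged.append(user)
    return flagged

# Hypothetical incident volumes: "mallory" suddenly spikes on day 5.
counts = {
    "alice":   [3, 4, 3, 5, 4],
    "bob":     [1, 2, 1, 1, 2],
    "mallory": [2, 3, 2, 2, 40],
}
print(baseline_anomalies(counts))  # → ['mallory']
```

The point of the sketch is the workflow, not the math: normal behavior is learned from the data itself, with no labels required, and only the outliers are surfaced to the analyst.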
On the supervised side, monitoring the actions of human analysts handling these tasks on an ongoing basis – tracking which incidents they rank as truly problematic, or assign for bulk remediation and/or policy review because they represent user mistakes or inefficient policies – is another example of machine learning applied in a directly useful manner.
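The supervised feedback loop can be sketched just as simply. The toy model below – again hypothetical, with invented tags and verdicts – learns from analyst dispositions how often each incident attribute corresponds to a real risk, then uses those learned rates to prioritize new incidents:

```python
from collections import Counter

def train_priority_model(labeled_incidents):
    """Learn, per incident attribute, how often analysts judged it a real risk.

    labeled_incidents: list of (attributes, is_risk) pairs, where attributes
    is a set of tags (e.g. {"usb_export", "off_hours"}) and is_risk is the
    analyst's verdict.  Returns a dict of tag -> observed risk rate.
    """
    risk_hits, totals = Counter(), Counter()
    for attrs, is_risk in labeled_incidents:
        for tag in attrs:
            totals[tag] += 1
            if is_risk:
                risk_hits[tag] += 1
    return {tag: risk_hits[tag] / totals[tag] for tag in totals}

def score(model, attrs):
    """Average learned risk rate across an incident's tags (0.0 if all unseen)."""
    rates = [model[t] for t in attrs if t in model]
    return sum(rates) / len(rates) if rates else 0.0

# Hypothetical analyst dispositions of past incidents.
history = [
    ({"usb_export", "off_hours"}, True),
    ({"usb_export"}, True),
    ({"cloud_upload"}, False),
    ({"cloud_upload", "off_hours"}, False),
]
model = train_priority_model(history)
# New incidents inherit priority from how analysts treated similar ones.
print(score(model, {"usb_export", "off_hours"}))  # → 0.75
print(score(model, {"cloud_upload"}))             # → 0.0
```

A production system would use a proper classifier rather than raw tag frequencies, but the shape of the loop is the same: analyst verdicts become training labels, and the model continuously re-ranks the queue they see next.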
Going forward, we can hope that security architecture backed by AI and machine learning – supplied with critical data streams such as threat intelligence and the wider gamut of data from existing security infrastructure, and even tied together with security automation and orchestration – will achieve some futuristic measure of innate understanding and rapid response. That time, in fact, may not be too far off.
But for today, and for the betterment of all of us working in the field – whether as practitioners, providers, or subject matter experts – the degree to which we keep the conversation trained on practical use cases will foster success instead of inevitable disappointment.
This article is published as part of the IDG Contributor Network.