Is Malware Heading Towards a WarGames-style AI vs AI Scenario?

Adam Kujawa, Director of Malwarebytes Labs, has been contemplating the evolution of malware attack and defense, attempting to work out strategies to stay ahead of cybercriminals in what has always been a technological game of leapfrog.

While malware has continued its trajectory of increasing stealth and persistence, defenders currently have the edge with their introduction of artificial intelligence (AI) and machine learning (ML). For now, these systems don’t do anything more than can be done by human analysts, but they do it at machine speed, don’t miss any red flags, and return predictions on maliciousness faster than could be achieved by a human team.

Meanwhile, the criminals have adapted their methodology by seeking to ‘fly under the radar’ of defense systems and to add persistence to their infiltrations. The first objective is largely achieved by adopting fileless malware, which is particularly effective against signature-based defenses. It is also effective against any ML-based system that has not yet learned, or been taught, to recognize the behavior of such attacks.

This is the primary weakness of ML systems: they cannot detect what they have not been trained to recognize. Once they understand an attack process, however, they detect it more reliably and faster.

The second ‘advance’ by cybercriminals has been to improve their persistence on a compromised network. Kujawa defines persistence as both increasing the time to detection and maintaining a presence on the compromised device so the malware can be regrown after detection.

One method used by attackers for both avoiding detection and maintaining persistence is for the malware to seek out and disable resident anti-malware software. “With this dual focus,” he writes, “a new class of malware has risen to prominence: under-the-radar malware.” This is a problem now. It will remain a problem for some time to come, and it is particularly potent in manually delivered attacks.

Kujawa cites SamSam as an example. “The reason this malware is difficult to remove,” he comments, “is because before it is launched, attackers are able to manually disable security software. This is done after attackers gain administrative control of the system, most likely through an RDP exploit on the system.”

At the end of last month, the DOJ charged two Iranians over their alleged role in developing and using SamSam. Despite the gang being just a two-man operation, its methodology of manual intrusion, stealth, persistence and reconnaissance while disarming defenses, followed by ransom extortion, has proven remarkably successful.

In July, Sophos estimated that the pair had already attacked more than 230 targets and were at that time receiving around $300,000 per month in ransom payments. One example of the steps taken to stay under the radar is the order of file encryption: SamSam starts with smaller files (less than 100MB) before progressing to larger files that take longer to encrypt. “This carefully curated approach,” said Sophos, “enables the attacker to achieve a greater volume of encrypted files before the attack is spotted and interrupted.”
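The ordering Sophos describes amounts to a size-sorted work queue. A minimal, illustrative sketch of that prioritization (the function names and the handling around the 100MB boundary are assumptions for illustration, not SamSam's actual code):

```python
import os

SMALL_FILE_LIMIT = 100 * 1024 * 1024  # the 100MB boundary described by Sophos

def size_ordered(paths):
    """Return paths sorted so sub-100MB files come first, smallest to largest.

    Encrypting many small files first maximizes the number of files locked
    before the attack is spotted and interrupted.
    """
    sizes = [(os.path.getsize(p), p) for p in paths]
    small = sorted((s, p) for s, p in sizes if s < SMALL_FILE_LIMIT)
    large = sorted((s, p) for s, p in sizes if s >= SMALL_FILE_LIMIT)
    return [p for _, p in small + large]
```

Defenders can invert the same observation: a process that enumerates and rewrites files in strict size order is itself a behavioral red flag.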

While ML systems can detect the presence of SamSam (and many other advanced malware families), Kujawa is concerned that, in the continuing game of leapfrog, attackers are already beginning to turn machine learning techniques against current and future targets.

Existing advanced malware, he writes, is “difficult to remove, detect and/or stop. However,” he warns, “they are just the beginning of the next phase of malware development, where technologies like AI, worms and fileless malware are all going to be commonplace in the threat landscape.”

The danger is not from malicious AI being installed on victim devices, but from malware communicating with the attackers’ remote AI systems. Having said that, in August 2018 IBM described a malware project called DeepLocker that uses its on-board AI to recognize an intended victim and decide whether to remain passive and invisible, or to detonate.

Kujawa does not believe technology such as DeepLocker is an immediate threat. “DeepLocker is far beyond what I think the threats we see today can do and I don’t expect that kind of tech to be a common thing we see at least for the next ten years,” he told SecurityWeek.

However, he continued, “I can absolutely see this technology used to create harder to detect malware or even embed the malware in legitimate apps and use advanced fingerprinting functionality to keep the malware hidden for longer.”

Advanced fingerprinting is what he sees as the more immediate threat from criminals’ use of AI — one that is already becoming evident. “Fingerprinting is going to be a huge part of AI’s job if we see anything similar to what the DeepLocker guys are doing, in the wild. One of the best ways to stay undetected,” he explained, “is to identify who you want to infect, what kind of system they are running and overall, will they be good targets?”
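The kind of environment check he describes can be sketched in a few lines. This is a defender’s-eye illustration only; the target profile, constants, and helper names are hypothetical and not taken from any real sample:

```python
import os
import platform
import socket

# Hypothetical target profile: the code acts only on hosts that match,
# and stays silent (and therefore hard to collect) everywhere else.
TARGET_OS = "Windows"
TARGET_DOMAIN_SUFFIX = ".corp.example.com"  # assumed, for illustration

def fingerprint():
    """Collect a few cheap, low-noise facts about the host."""
    return {
        "os": platform.system(),
        "hostname": socket.getfqdn(),
        "cpu_count": os.cpu_count(),
    }

def is_target(fp):
    """Decide target vs. non-target; on non-targets nothing visible happens."""
    return fp["os"] == TARGET_OS and fp["hostname"].endswith(TARGET_DOMAIN_SUFFIX)
```

The point of the sketch is the asymmetry Kujawa highlights: because the malicious behavior never triggers on non-targets, researchers struggle to obtain detonating samples.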

We have already seen cybercriminals use Real Time Bidding (RTB, a common advertising process for delivering the right ad to the right person) to buy ad space targeting the user profiles judged most likely to pay a ransom if infected. “But we also saw,” added Kujawa, “plenty of code being utilized in exploit kits that identified specific applications or information about the user, which made their ability to target specific users even greater. At the end of the day, the threat that spreads itself all over the place is going to be identified first. So, like SamSam, DeepLocker-inspired malware is going to be difficult to catch or collect samples of, at which point it won’t really matter because the AI would have morphed the malware or deployed new versions that haven’t been caught yet.”

As for protection against these threats, he suggests, “the best thing to do is focus on behavioral detection and the use of AI on our end to combat malware that is designed by another AI to evade detection. At some point, there may be little interaction from humans while we watch our AI battle others on the internet.” The danger with getting better at defense is that it encourages attackers to get better at attack.
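Behavioral detection, in its simplest form, scores what a process does rather than what its file looks like. A toy sketch of the idea (the event names, weights, and threshold are all invented for illustration; a production engine would learn such weights rather than hard-code them):

```python
# Weights for suspicious behaviors observed at runtime (invented values).
WEIGHTS = {
    "disables_security_software": 0.9,    # SamSam-style defense disarming
    "mass_file_rename": 0.7,              # typical ransomware side effect
    "spawns_script_from_office_doc": 0.6,
    "reads_browser_credentials": 0.5,
}
THRESHOLD = 1.0

def score(events):
    """Sum the weights of observed behaviors; unknown events contribute nothing."""
    return sum(WEIGHTS.get(e, 0.0) for e in events)

def is_malicious(events):
    """Flag a process once its combined behavior crosses the threshold."""
    return score(events) >= THRESHOLD
```

No single behavior here is conclusive on its own; it is the combination, observed regardless of how the binary was packed or morphed, that trips the detector.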

Related: The Malicious Use of Artificial Intelligence in Cybersecurity 

Related: The Current Limitations and Future Potential of AI in Cybersecurity 

Related: Top Experts Warn Against ‘Malicious Use’ of AI 

Related: The Challenge of Training AI to Detect Unique Threats


Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
