By Aaron Mulgrew | Blog

In the world of cyber security, Artificial Intelligence (AI) and Machine Learning (ML) are very much in vogue. But with the news last week that researchers demonstrated a universal bypass for one of the leading AI-based cyber security solutions, Deep Secure Lead Researcher Aaron Mulgrew cautions organisations against viewing AI as a cyber silver bullet.

Growing Popularity

In the context of cyber security, Artificial Intelligence (AI) attempts to defend a system by using predictive logic to reason about whether a pattern of behaviour is indicative of a threat. Machine Learning (ML) is the process by which the AI learns which patterns of behaviour indicate a threat.
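As a rough illustration of what that learning looks like in practice, here is a minimal sketch in Python using scikit-learn. Everything in it is invented for the example: real engines use far richer features than these four toy columns.

    # Minimal, invented sketch of ML-based threat scoring.
    # Real engines use far richer features; these rows are toy data.
    from sklearn.ensemble import RandomForestClassifier

    # Each row: [num_imports, file_entropy, uses_network, writes_registry]
    X_train = [
        [12, 4.1, 0, 0],  # benign samples
        [18, 4.9, 1, 0],
        [85, 7.6, 1, 1],  # malicious samples
        [60, 7.2, 1, 1],
    ]
    y_train = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Score an unseen sample: estimated probability that it is malicious.
    print(model.predict_proba([[40, 6.8, 1, 0]])[0][1])

The engine is only ever as good as the patterns it has been shown, which is where the trouble starts.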

The use of AI as a defensive weapon in the war against cybercrime has grown massively over the last 12-18 months and shows no sign of slowing down any time soon. According to TEISS, 63% of IT decision makers plan to adopt AI as part of their cyber defence.

The core assumption behind the use of AI in cyber security is that the AI engine will learn from mistakes. So, someone gets hacked, the AI engine learns what went wrong and stops it next time. And it is here that the difficulties with wholeheartedly embracing the AI approach begin.

For a start, machine learning models will never be perfect. There will always be imperfections in the dataset they’re learning from: some of the data will be clean code, and some malicious samples simply won’t be included in the set. This means that if a new attack comes along, the AI defences may not detect it.

Attack Surface

The AI engine learns from data submitted to it. By its very nature, that data is contained in highly complex structures, which the AI engine must take apart to understand what’s inside, so its attack surface is potentially huge. The risks associated with this were thrown into sharp relief with the news last week that researchers had exploited a bug in one of the leading “pure AI” anti-malware products and created a universal bypass around its defences.

The researchers discovered that by simply appending a selected list of strings to a malicious file, it was possible to dramatically change the score the AI engine attributed to it and so avoid detection. The approach proved 100% effective against the top 10 malware for May 2019 and 90% effective against a larger sample of 384 pieces of malware.
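The researchers’ exact technique targeted one vendor’s model, but the class of attack is easy to illustrate. The deliberately naive Python sketch below is our invention: the scorer, the strings and the weights are all made up, yet the principle is the same. A detector that rewards “benign-looking” content can be gamed by bolting that content onto a malicious file.

    # Deliberately naive scorer that rewards "benign-looking" strings.
    # Strings and weights are invented; the real bypass targeted a
    # commercial model, but the principle is the same.
    BENIGN_MARKERS = [b"Copyright Microsoft", b"Mozilla/5.0", b"<assemblyIdentity"]

    def toy_score(data: bytes) -> float:
        """Return a maliciousness score in [0, 1]; lower evades detection."""
        score = 1.0
        for marker in BENIGN_MARKERS:
            if marker in data:
                score -= 0.3  # each benign marker drags the score down
        return max(score, 0.0)

    malware = b"MZ...payload..."  # stand-in for a real malicious binary
    print(toy_score(malware))     # 1.0 -> flagged

    # Evasion: append the benign strings. The payload is untouched and
    # still runs, but the score collapses below a typical threshold.
    evasive = malware + b" ".join(BENIGN_MARKERS)
    print(toy_score(evasive))     # ~0.1 -> slips past

Crucially, nothing about the payload itself changes; only the model’s view of the file does.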

Part of a Bigger Picture

The researchers responsible for the exploit pointed out that they saw it as illustrative of the problem with relying purely on AI for your defence, rather than as a reflection on the efficacy of a particular vendor’s product. Of course, they are right. AI does improve overall security, but defence in depth is required, as no single cyber security product or vendor will address all the vulnerabilities that exist in an enterprise.

What AI cannot effectively do is remove new, hitherto “undetected” threats. Dealing with these requires a radically different approach, one where processing power is not spent trying to learn about the bad guys but is instead spent completely re-creating incoming business content at the boundary, so the threat can’t gain ingress.

Content Threat Removal (CTR) is a technology that can be deployed on the boundary to automatically transform all inbound documents and images, completely removing any threats they might contain. Nothing is trusted. Every inbound document is assumed to be “bad” and every document is transformed, ensuring none can contain the attacker’s original exploit material. This approach is simpler, far cheaper in processing terms, and it ensures the threat is removed.
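To make the regeneration idea concrete, here is a small Python sketch using Pillow. It is our illustration of the general approach, not Deep Secure’s implementation: the inbound image is decoded down to raw pixels and an entirely new file is built from them, so anything hidden in the original’s metadata or structure never crosses the boundary.

    # Sketch of content regeneration for images (an illustration of the
    # general approach, not the actual CTR product). Only the pixel data
    # survives; metadata, appended bytes and malformed structures in the
    # original are discarded because the output is built from scratch.
    from PIL import Image

    def regenerate_image(in_path: str, out_path: str) -> None:
        with Image.open(in_path) as img:
            pixels = img.convert("RGB")  # extract the business content only
        clean = Image.new("RGB", pixels.size)
        clean.putdata(list(pixels.getdata()))
        clean.save(out_path, format="PNG")  # a freshly constructed file

    regenerate_image("inbound.png", "delivered.png")

The recipient gets a visually identical image; the attacker’s bytes never do.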

Looking ahead, AI will find its niche in the cyber security armoury working in tandem with technologies like CTR. AI can deliver insight derived from working on copies of complex data offline in support of forensic investigation, while a technology like CTR provides the real-time flow of threat-free documents and images.

