by Aaron Mulgrew | Blog


In the context of cyber security, Artificial Intelligence (AI) attempts to defend a system by using predictive logic to reason about whether a pattern of behaviour is indicative of a threat. Machine Learning (ML) is the process by which the AI learns which patterns lead to bad behaviour.
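To make that learning step concrete, here is a minimal, purely illustrative sketch. The feature names and sample values are invented for illustration, and scikit-learn stands in for whatever engine a real vendor uses:

```python
# Minimal, illustrative sketch of ML-driven threat detection.
# The features and labels below are invented; a real engine would
# extract thousands of features from binaries or behavioural telemetry.
from sklearn.ensemble import RandomForestClassifier

# Each row: [entropy, imported_api_count, writes_to_registry (0/1)]
samples = [
    [3.1, 40, 0],   # typical benign binary
    [2.8, 55, 0],   # typical benign binary
    [7.6, 210, 1],  # typical malicious binary (packed, noisy imports)
    [7.9, 180, 1],  # typical malicious binary
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(samples, labels)

# Score an unseen file's feature vector: probability it is malicious.
unseen = [[7.2, 150, 1]]
print(model.predict_proba(unseen)[0][1])
```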

The use of AI as a defensive weapon in the war against cybercrime has grown massively over the last 12-18 months and shows no sign of slowing down any time soon. According to TEISS, 63% of IT decision makers are planning to adopt AI as part of their cyber defence.

The core assumption behind the use of AI in cyber security is that the AI engine will learn from mistakes. So, if someone gets hacked, the AI engine learns what went wrong and stops it the next time. However, it's here that the difficulties with wholeheartedly embracing an AI-based approach begin.

For a start, ML models will never be perfect. There will always be imperfections in the dataset they're learning from: some samples labelled as clean may not be, and whole families of malicious data may be missing from the set entirely. This means that if a new attack comes along, an AI-based defence might not detect it.

Attack Surface

The AI engine learns from data submitted to it. By its very nature, that data is contained in highly complex structures, which the AI engine must take apart to understand what’s inside. This creates a potentially huge attack surface. The risks associated with this were thrown into sharp relief during 2019, with the news that researchers had exploited a bug in one of the leading “pure AI” anti-malware products and created a universal bypass around it.

Researchers discovered that by simply appending a selected list of strings to a malicious file, it was possible to dramatically change the score the AI engine attributed to it, and thus avoid detection. The approach proved 100% effective against the top 10 malware strains for May 2019, and 90% effective against a larger sample of 384 pieces of malware.
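To see why such a bypass is even possible, consider a deliberately naive toy scorer. This is not the vendor's actual model; it simply rates a file by the proportion of suspicious strings it contains, which is enough to show how appending benign strings can dilute a score below the detection threshold:

```python
# Toy demonstration of the string-append bypass described above.
# This is NOT the exploited product's model; it is a naive scorer that
# averages per-string "badness", which suffices to show the principle.

SUSPICIOUS = {"CreateRemoteThread", "VirtualAllocEx", "keylogger"}

def naive_score(strings):
    """Fraction of extracted strings that look suspicious (0.0 - 1.0)."""
    hits = sum(1 for s in strings if s in SUSPICIOUS)
    return hits / len(strings)

malware_strings = ["CreateRemoteThread", "VirtualAllocEx", "keylogger"]
print(naive_score(malware_strings))  # 1.0 -> flagged as malicious

# Attacker appends strings harvested from a known-good program.
benign_padding = ["LoadLevel", "PlaySound", "SaveGame"] * 10
print(naive_score(malware_strings + benign_padding))  # ~0.09 -> slips through
```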

Part of a Bigger Picture

The researchers responsible for the exploit pointed out that they saw it as illustrative of the problem with relying purely on AI for your defence, rather than as a reflection on the efficacy of a particular vendor's product. Of course, they are right. AI does improve overall security, but defence in depth is required. No single vendor will solve all the vulnerabilities that exist in an enterprise.

What AI cannot effectively do is remove new, hitherto "undetected" threats. Dealing with these requires a radically different approach, where processing power is spent not on trying to learn about the bad guys but on completely re-creating incoming business content at the boundary, so the threat can never gain access.

We call this approach zero-trust threat removal.

Zero trust threat removal uses technology that can be deployed on the boundary to automatically transform all inbound or outbound documents and images and completely remove any threats they might contain. Nothing is trusted. Every document is assumed to be “bad” and every document is transformed, ensuring none can contain the attacker’s original malware. This approach is simpler, far cheaper in processing terms and ensures the threat is removed before it ever enters an organisation.
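As a rough illustration of the principle, here is a minimal sketch for a single content type, images, using the Pillow library. The file paths are hypothetical, and a real product covers many more formats, rebuilding each one to a known-good specification:

```python
# Minimal sketch of the "transform, don't inspect" idea for one content
# type (images), using Pillow. Only the business content (the pixels)
# is extracted and written into a brand-new file, so anything else in
# the inbound file -- metadata, appended payloads, exploit structures --
# never crosses the boundary.
from PIL import Image

def rebuild_image(inbound_path: str, outbound_path: str) -> None:
    with Image.open(inbound_path) as untrusted:
        # Copy only the raw pixel data into a freshly created image.
        clean = Image.new(untrusted.mode, untrusted.size)
        clean.putdata(list(untrusted.getdata()))
    # Save the new image; the original file is never passed through.
    clean.save(outbound_path, format="PNG")

rebuild_image("inbound/photo.png", "clean/photo.png")  # hypothetical paths
```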

Looking ahead, where AI will find its niche in the cyber security armoury is working in tandem with technologies like threat removal.

AI can deliver insight derived from working on copies of complex data offline in support of forensic investigation, while a technology like threat removal provides the real-time flow of threat-free documents and images.
