For decades we have used heuristic methods to analyse data, looking for pre-programmed patterns through Boolean-based logic: AND, OR and IF. This logic has been critical in automating simple, repetitive tasks that are usually prone to human error. However, this programmatic approach cannot meet the defence requirements posed by the current cyber threat.
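As a sketch of how such Boolean rule logic works in practice, consider the toy detector below. The indicators, hash list and rule are hypothetical, chosen only to show the AND/OR/IF structure, not taken from any real anti-virus engine:

```python
# Hypothetical heuristic rule: flag a file IF its hash is known-bad,
# OR its extension is suspicious AND it writes to the registry.
SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".vbs"}
KNOWN_BAD_HASHES = {"5d41402abc4b2a76b9719d911017c592"}  # illustrative value

def looks_malicious(filename: str, file_hash: str, writes_to_registry: bool) -> bool:
    has_bad_extension = any(filename.endswith(ext) for ext in SUSPICIOUS_EXTENSIONS)
    return file_hash in KNOWN_BAD_HASHES or (has_bad_extension and writes_to_registry)

print(looks_malicious("invoice.exe", "0" * 32, writes_to_registry=True))   # flagged
print(looks_malicious("report.pdf", "0" * 32, writes_to_registry=False))  # passes
```

The rule only ever matches patterns it was told to look for in advance, which is exactly the limitation the paragraph above describes.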
Microsoft Defender Antivirus, for example, processes around 90 billion potentially malicious encounters per day, most of which are entirely unique to the device in question and have never been documented before. This uniqueness results from the rise of polymorphic malware: malware that can independently change its own source code to create a new ‘digital signature’ while maintaining its core functions. Microsoft research shows that the use of polymorphic malware is so extensive that 96% of malware is seen only once. By leveraging AI models, anti-virus solutions can not only detect more complex variations in the digital signatures of polymorphic malware but can also use deep neural networks to examine similarities in the source code that confirm it to be malicious. Such analysis was previously carried out by cyber security experts and is so time-consuming that the threat often moves faster than the solution.
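The signature problem can be illustrated in a few lines. In this sketch, two byte strings share the same ‘core’ but differ only in padding, a crude stand-in for real code mutation; an exact-signature hash match fails even though the shared core remains, which is the kind of similarity an AI model can still learn:

```python
import hashlib

# Two hypothetical malware "variants": identical core behaviour,
# different surrounding bytes (padding standing in for mutated code).
variant_a = b"PAYLOAD_CORE" + b"\x90" * 4   # one form of padding
variant_b = b"PAYLOAD_CORE" + b"\xcc" * 4   # different padding, same core

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

print(sig_a == sig_b)                    # False: signature match fails
print(variant_a[:12] == variant_b[:12])  # True: the shared core survives
```

Real polymorphic engines rewrite instructions rather than append padding, but the effect on exact-match signatures is the same.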
The DARPA Cyber Grand Challenge is another example of AI nudging further ahead in the race against malicious cyber actors. The winning ‘Mayhem’ program from the Las Vegas-based competition uses AI to uncover and patch security vulnerabilities in networks at breakneck speed. As a result of this groundbreaking work, adversarial machine learning software to both defend and attack networks is being developed and integrated into cyber security products.
However, there is still more work to be done. In cyber security, understanding and communicating the process by which a decision is made is a key part of informing the defence posture going forward. In many cases the complexity of AI makes comprehending the decision it has reached very difficult, which leaves the role of human decision making within the AI process unclear. If we agree that the decision must be understood at least on some level, then there must be a role for the human cyber analyst. What exactly that role should be is a question currently without an obvious answer.
Furthermore, just as polymorphic malware was created to bypass heuristic detection methods, attempts are already being made to thwart and even weaponise AI tools. Not only could malicious AI be generated to assist or even perform complex cyber-attacks, but vulnerabilities in the decision-making process of AI could be used to manipulate the actions it takes. In time, AI could become as vulnerable to attack as any other piece of software on our networks, and it is imperative that research into the secure design of such algorithms continues.
AI is undoubtedly already playing a role in both cyber defence and offence, and this role will only grow to meet our evolving and complex data processing requirements. While AI holds power for both good and ill, it is no different to any other technology. However, without AI we cannot meet the cyber threat we currently face, and without continual progression the AI arms race between cyber attackers and defenders will not be an even fight.
PENETRATION TESTER, CREST CRT
Alex has a background in mathematics which lends itself to the analytical and critical thinking skills required in penetration testing. As a CREST Registered Tester, Alex has experience delivering a wide range of penetration tests.