Cybersecurity can be made agile with zero-shot AI

Artificial intelligence (AI) is being looked upon as the next frontier to make cyber-defence robust and scalable. What makes AI, and particularly machine learning (ML), attractive in cybersecurity is the ability to learn from large volumes of telemetry data and find patterns of abnormal behaviour. ML algorithms can be used to find anomalies across different parts of the enterprise, such as application logs, network flows, user activities and authentication logs. As enterprises adopt models such as zero trust, with strict identity verification and explicit permissions, augmenting these with ML algorithms that monitor user behaviour patterns becomes critical.

Modern security information and event management (SIEM) and intrusion detection systems leverage ML to correlate network features, identify patterns in data and highlight anomalies corresponding to attacks. Security researchers spend many hours understanding these attacks and classifying them into known kinds such as port sweeps, password guessing and teardrop attacks. However, with a constantly changing attack landscape and the emergence of advanced persistent threats (APTs), hackers are continuously finding new ways to attack systems.

A static classification of attacks cannot adapt to the new and novel tactics adopted by adversaries. Also, with alarms constantly streaming in from multiple sources across the network, it becomes difficult to distinguish and prioritize particular types of attacks: the classic alarm-flooding problem.

A possible solution is a smart system that auto-labels alarms and categorizes them, so that the analyst can focus on particular alarm types. We propose a dynamic classification system that takes a zero-shot classification approach to ML.
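As a rough illustration of what zero-shot labelling can look like in practice, the sketch below scores a textual alert against candidate attack categories using the open-source Hugging Face transformers library; the model choice, the alert text and the label set are assumptions made purely for illustration, not part of our proposal.

    # A minimal sketch of zero-shot alarm labelling; the model and the
    # alert text are illustrative assumptions.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    alert = ("Host 10.0.4.12 uploaded 240MB to an external cloud storage "
             "domain outside business hours")
    labels = ["data exfiltration", "port sweep",
              "password guessing", "benign activity"]

    result = classifier(alert, labels)
    # labels come back sorted by score; the top one is the suggested category
    print(result["labels"][0], round(result["scores"][0], 2))

The point is that the label set is just a list of phrases: new attack categories can be added on the fly without retraining the model.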

What exactly is this? The traditional approach to applying ML is supervised learning, where labelled data points are used to train models to make predictions. For example, a classifier model may process a record logged by a network monitor and classify it as an attack. While this is useful, such models can only learn from previously known attacks; a human would need to annotate the network flows carrying the attack and feed them in to build the model. The other approach gaining popularity is unsupervised learning, where models learn what “normal” behaviour looks like and flag any anomalies. This approach can highlight unknown attack patterns, but it only gives the security analyst anomaly information.
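To make the contrast concrete, here is a hedged sketch using scikit-learn on made-up flow features (bytes sent, duration in seconds, and a destination-is-cloud flag); the data and parameters are assumptions chosen only to illustrate the two styles of model.

    # A toy contrast of supervised and unsupervised approaches.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, IsolationForest

    # columns: bytes_sent, duration_s, dest_is_cloud
    X = np.array([[5e3, 2.0, 0], [8e3, 3.1, 0], [2e8, 45.0, 1], [6e3, 2.4, 0]])
    y = np.array([0, 0, 1, 0])   # 1 marks the one attack we have labelled

    clf = RandomForestClassifier(random_state=0).fit(X, y)            # supervised
    iso = IsolationForest(contamination=0.25, random_state=0).fit(X)  # unsupervised

    new_flow = [[3e8, 60.0, 1]]
    print(clf.predict(new_flow))  # answers only in terms of attack types it was shown
    print(iso.predict(new_flow))  # -1 flags an anomaly, but carries no attack label

The supervised model is limited to the labels it has seen; the unsupervised model can flag the unfamiliar, but cannot say what it has found.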

One way to tackle this is an emerging research area in AI/ML called explainable AI (XAI). Here, models are either redesigned or enhanced to provide an explanation along with the prediction. So, when the model flags an anomaly, it also indicates which feature values drove that decision.
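As a toy illustration of the idea (the specific technique and the numbers below are assumptions; we are not prescribing a particular XAI method), an explanation can be as simple as ranking how far each feature of a flagged flow deviates from the behaviour the model learned as normal.

    # Rank features of a flagged flow by how far they sit from "normal".
    import numpy as np

    feature_names = ["bytes_sent_mb", "duration_s", "dest_is_cloud"]
    normal_mean = np.array([8.0, 3.0, 0.05])   # baseline from benign traffic
    normal_std = np.array([4.0, 1.5, 0.2])

    flagged_flow = np.array([240.0, 42.0, 1.0])  # the anomalous transfer
    z_scores = (flagged_flow - normal_mean) / normal_std

    # the explanation shown to the analyst: which features drove the decision
    for name, z in sorted(zip(feature_names, z_scores), key=lambda t: -abs(t[1])):
        print(f"{name}: {z:+.1f} standard deviations from normal")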

XAI and zero-shot learning can be applied to different areas of a cybersecurity ecosystem. Take the example of an ML model that monitors traffic on an office network. Say it flags a data transmission of over 100MB from a network computer to a Google Drive account as an anomaly, something different from normal network flows. If we show the security operations centre (SOC) analyst the parameters that triggered the flag, such as the size of the data files and the destination domain, we can save the analyst valuable time in classifying this as a data exfiltration attack. The system can then take feedback from the analyst and start auto-labelling similar new attacks as data exfiltration. Extrapolate this to a network with thousands of nodes and users, and explainability plus zero-shot learning can save analysts hours otherwise spent searching for the needle in the haystack.
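A rough sketch of that feedback loop, assuming hypothetical feature vectors and a hand-picked similarity threshold (neither comes from a real deployment), might look like this: once an analyst labels one flagged flow, sufficiently similar future anomalies inherit that label automatically.

    # Auto-label new anomalies by nearest-neighbour lookup against analyst feedback.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    labelled_anomalies = []   # list of (feature_vector, analyst_label)

    def record_feedback(features, label):
        labelled_anomalies.append((np.array(features), label))

    def auto_label(features, max_distance=25.0):   # threshold is illustrative
        if not labelled_anomalies:
            return "unlabelled anomaly"
        vectors = np.array([v for v, _ in labelled_anomalies])
        nn = NearestNeighbors(n_neighbors=1).fit(vectors)
        dist, idx = nn.kneighbors([features])
        if dist[0][0] <= max_distance:
            return labelled_anomalies[idx[0][0]][1]
        return "unlabelled anomaly"

    # analyst confirms the flagged cloud upload was an exfiltration attempt
    record_feedback([240.0, 42.0, 1.0], "data exfiltration")
    # a later, similar anomaly gets the label automatically
    print(auto_label([250.0, 40.0, 1.0]))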

In today’s world, enterprises face APTs: well-funded attackers who focus on a high-value target and can stay undetected inside networks for days. Behavioural analytics becomes key to understanding patterns and identifying the tactics, techniques and procedures used by attackers. Zero-shot learning models that decipher tactics such as reconnaissance, privilege escalation and exfiltration can be extremely valuable in preventing major damage. Over time, this active-learning system keeps improving from analyst feedback.

This technology has huge potential to improve an organization’s cybersecurity risk posture. However, we should remember that no AI is foolproof.

Dattaraj Rao is chief data scientist at Persistent Systems.
