The world has moved to the cloud, and the fourth industrial revolution is transforming our lives, society, and work. While this shift makes things easier and more accessible, it comes with its own perils, such as cyberattacks. The need for a stronger cybersecurity landscape is greater than ever, as cybercriminals have grown more clever. 2020 gave attackers new opportunities to strike, such as email phishing scams; in one new low, phishers exploited the COVID-19 vaccine rollout to trick people into paying for fake vaccines.
Scientists are working day and night to build innovative artificial intelligence and machine learning tools that keep pace with evolving exploits. But experts still debate AI's place in the cybersecurity arsenal. When it comes to determining what data is safe to send outside the company, humans make such intricate decisions far better than machines. Relying on AI for these decisions can lead to leaked data if the technology is not mature enough to grasp the gravity of the situation. So how exactly does artificial intelligence fit into the cybersecurity picture, and where can it present challenges?
The one place artificial intelligence struggles in mitigating the risk of accidental insider breaches is spotting the differences between similar documents and knowing which files are appropriate to send to a specific person. For example, a company's invoices follow the same template every time they are sent, with minor differences in text and numbers that machine learning models fail to distinguish. The technology categorizes all the invoices as the same document and lets a user send any of them as an attachment, whereas a human would know which invoice should go to which customer.
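To see why templated invoices look identical to a model, consider a minimal sketch: a bag-of-words cosine similarity over two invoices that differ only in the customer and the amount. The invoice text and the 0.85 "same document" threshold are invented for illustration; real systems use richer features, but the failure mode is the same.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two documents."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Two invoices from the same template; only the customer and amount differ.
invoice_a = "INVOICE Acme Corp Bill To Customer A Amount Due 1200 USD Payment Terms Net 30"
invoice_b = "INVOICE Acme Corp Bill To Customer B Amount Due 9800 USD Payment Terms Net 30"

# The similarity lands well above a typical "same document" threshold,
# so a naive filter would treat either invoice as interchangeable.
print(cosine_similarity(invoice_a, invoice_b) > 0.85)
```

The two tokens a human cares most about, the customer and the amount, are exactly the ones the similarity score washes out.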
In a large organization, this kind of AI technology would block only a small number of emails from being sent, and when it does flag an error, it notifies the administrators rather than the person sending the wrong email.
Data-Intensive Defence Strategy
When using AI technology, every email goes to an external, off-site system to be analyzed. This is a problem for industries that handle a lot of highly sensitive information and cannot afford for that data to leave their control. A machine learning system would also have to retain part of this sensitive information in order to learn rules and make accurate decisions from it. And given how machine learning works, the learning phase can last for months, so it cannot provide instant security controls. For these reasons, many companies are not comfortable with their sensitive data being sent elsewhere.
AI’s Role In Cybersecurity
In a business's cybersecurity system, AI has a critical role to play. For instance, antivirus software operates on a 'yes or no' policy: a file either matches a known threat or it does not. AI can go further and quickly assess whether a file is likely to crash the system, take down the network, and so on. So while AI might not be the best weapon of defence for preventing data leakage through email, it does have an important role to play in select areas like threat analysis and virus detection.
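The contrast can be sketched in a few lines. The signature set, feature names, and weights below are all invented for illustration: the point is only that a signature check yields a binary verdict, while an AI-style model yields a graded risk score that can rank how damaging a file is likely to be.

```python
# Hypothetical signature database and feature weights, purely for illustration.
KNOWN_BAD_HASHES = {"9f86d081884c", "2c26b46b68ff"}

def signature_verdict(file_hash: str) -> bool:
    """Classic antivirus logic: a binary 'yes or no' match against known signatures."""
    return file_hash in KNOWN_BAD_HASHES

def risk_score(features: dict) -> float:
    """AI-style assessment: a graded score over behavioural features."""
    weights = {
        "writes_to_system_dir": 0.4,   # tampers with OS files
        "opens_raw_socket": 0.3,       # unusual network behaviour
        "obfuscated_strings": 0.2,     # hides its payload
        "vendor_signed": -0.5,         # signed binaries are lower risk
    }
    return sum(w for name, w in weights.items() if features.get(name))

# Signature check: a novel sample with no known hash simply passes.
print(signature_verdict("unknown-hash"))
# Score-based check: the same sample's behaviour still pushes it over a triage threshold.
print(risk_score({"writes_to_system_dir": True, "opens_raw_socket": True}) > 0.5)
```

A real model would learn its weights from labelled samples rather than hard-code them, but the shape of the output, a score instead of a verdict, is what lets it prioritize threats the signature database has never seen.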