INDUSTRY INSIGHT

Understanding AI: The good, bad and ugly

Although it’s still in the early stages of adoption, the use of artificial intelligence in the public sector has vast potential. According to McKinsey & Company, AI can help identify tax-evasion patterns, sort through infrastructure data to target bridge inspections, sift through health and social-service data to prioritize cases for child welfare and support, or even predict the spread of infectious diseases.

Yet as the promises of AI grow increasingly attainable, so do the risks associated with it.

Public-sector organizations, which house and protect sensitive data, must be even more alert and prepared for attacks than other businesses. Plus, as technology becomes more complex and integrated into users’ personal and professional lives, agencies can’t ignore the possibility of more sophisticated attacks, including those that leverage AI.

With that in mind, it’s important to understand new trends in AI, especially those that impact how agencies should be thinking about security.

Defining adversarial machine learning

“Simple” or “common” AI and machine learning developments have the potential to improve outcomes and reduce costs within government agencies, just as they do for other industries. AI and ML technology is already being incorporated into government operations, from customer-service chatbots that help automate Department of Motor Vehicles transactions to computer vision and image recognition applications that can spot stress fractures in bridges to assist a human inspector. The technology itself will continue to mature and be implemented more widely, which means understanding of the technology (both the good and the bad) must evolve as well.

AI and ML statistical models rely on two main components to function properly and execute on their intended purposes: observability and data. When considering how to safeguard both the observability and the data within a model, there are a few questions to answer: What information could adversaries obtain from the model to build their own? How similar is the environment an agency is building to those of other organizations? Is the time-elapsed learning and feedback mechanism modeled and tested?

Models are built on assumptions, so if there are similar underlying assumptions across environments, an adversary has an increased opportunity to do one of the following to the model:

  1. Poisoning the data used to feed the AI. If adversaries modify the values used to build, improve or refine a model, they can alter its results (see the sketch after this list).
  2. Invalidating one of the underlying assumptions, such as obfuscating, hiding or redirecting the necessary, meaningful data the model learns from. If the training data is simply never found or known, the trust anchor of an environment can be circumvented.
  3. Building an improved AI model that supplants or overpowers the original, akin to deepfakes.
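To make the data-poisoning risk concrete, here is a minimal sketch, assuming a generic scikit-learn classifier on synthetic data. An adversary who can rewrite a portion of the training labels changes what the model learns and typically drags down its accuracy; the dataset, model and flip rate are illustrative, not drawn from any agency system.

```python
# Minimal sketch of training-data (label-flip) poisoning.
# Dataset, model and flip rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The adversary flips 40% of the positive-class training labels,
# biasing the retrained model toward the negative class.
rng = np.random.default_rng(0)
poisoned_labels = y_train.copy()
positives = np.flatnonzero(poisoned_labels == 1)
flipped = rng.choice(positives, size=int(0.4 * len(positives)), replace=False)
poisoned_labels[flipped] = 0
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

print("accuracy trained on clean labels:   ", clean_model.score(X_test, y_test))
print("accuracy trained on poisoned labels:", poisoned_model.score(X_test, y_test))
```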

Essentially, if agencies can teach AI to execute as their teams do, an adversary can teach AI to behave like an attacker as well, as user behavior analytics tools demonstrate today. Adversarial machine learning, then, is a technique that attempts to deceive, undermine or manipulate models by supplying false inputs into both observability and data.
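The “false input” side of that definition can be sketched just as simply. Assuming the same kind of illustrative scikit-learn model, the snippet below nudges a correctly classified record against the model’s learned weights until its prediction flips:

```python
# Minimal sketch of an evasion-style false input against a linear classifier.
# The model, dataset and perturbation direction are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

record = X[0].copy()
original = clf.predict([record])[0]

# Move the record in the direction that lowers the score of its current class.
weights = clf.coef_[0]
direction = -np.sign(weights) if original == 1 else np.sign(weights)

eps = 0.25
while clf.predict([record + eps * direction])[0] == original:
    eps *= 2  # keep enlarging the perturbation until the prediction flips

print(f"original prediction {original} flips with a perturbation of size {eps}")
```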

As attackers become more refined and nuanced in their approach — from building adversarial machine learning models to model poisoning — they could completely disrupt all AI-related efforts within an organization.

Getting ahead, preparing for new risks

AI and ML are already helping streamline cybersecurity efforts, and this technology will, of course, play a role in preventing and detecting more sophisticated attacks as well, so long as the models are trained to do so. As AI algorithms continue to learn and behaviors are normalized, agencies can better leverage models for authentication, vulnerability management, phishing detection, monitoring and augmenting personnel.

Today, AI is improving cybersecurity processes in two ways: It filters through the data quickly based on trained algorithms, which know exactly what to look for, and it helps identify and prioritize attacks and behavioral changes that require the attention of the security operations team, who will then verify the information and respond. As AI evolves, these actions and responses will be handled by the algorithms and tools themselves, with less human interaction and greater velocity. For example, adversaries could successfully log in using an employee’s credentials, which might otherwise go unnoticed. If they are logging in for the first time from a new location, or at a time when that user was not expected to be online, AI can quickly recognize those anomalous behaviors and push an alert to the top of the security team’s queue or take more immediate action to disallow the behavior.
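As a rough illustration of that login scenario, the sketch below trains an unsupervised outlier detector on one user’s historical logins and flags a session that breaks the pattern. The features (hour of login, distance from the usual location, a new-device flag) and the detector are assumptions made for the sketch, not a description of any particular product.

```python
# Minimal sketch of flagging an anomalous login with an unsupervised detector.
# Features and values are illustrative assumptions, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Historical logins for one user: business hours, near the usual location, known device.
history = np.column_stack([
    rng.normal(10, 2, 500),   # hour of day
    rng.normal(5, 2, 500),    # kilometers from the usual location
    np.zeros(500),            # 0 = known device, 1 = new device
])

detector = IsolationForest(contamination=0.01, random_state=1).fit(history)

# A 3 a.m. login from 4,000 km away on an unseen device.
suspect = np.array([[3.0, 4000.0, 1.0]])
if detector.predict(suspect)[0] == -1:
    print("Anomalous login: push an alert to the top of the SOC queue or block the session.")
```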

However, organizations, especially government bodies, must take their knowledge of AI a step further and prepare for the attacks of tomorrow by becoming aware of new, evolving and complex risks. Data must be viewed from both an offensive and a defensive perspective, and teams must continuously monitor models and revise and retrain them to obtain deeper levels of “intelligence.” ML models, for example, must be trained to detect adversarial threats within the AI itself by supporting the following (a rough sketch follows the list):

  • Detection – be alerted when a threat is sensed.
  • Reaction – take action to minimize impact of an actual threat.
  • Prevention – stop potential threats before they occur.
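One way those three capabilities could wrap a model’s retraining pipeline is sketched below. The z-score-style scoring and the thresholds are illustrative assumptions; a production deployment would rely on purpose-built drift and poisoning checks.

```python
# Minimal sketch of a detection/reaction/prevention loop around a retraining pipeline.
# The z-score scoring and thresholds are illustrative assumptions.
import numpy as np

ALERT_THRESHOLD = 3.0   # detection: flag records this far from the known-good norm
REJECT_THRESHOLD = 6.0  # prevention: never let records this extreme reach the model

def screen_batch(batch, baseline_mean, baseline_std):
    """Split an incoming training batch into accepted and quarantined records."""
    accepted, quarantined = [], []
    for record in batch:
        # Largest per-feature z-score against the known-good training distribution.
        score = float(np.max(np.abs((record - baseline_mean) / baseline_std)))
        if score >= REJECT_THRESHOLD:
            continue  # prevention: drop it before it can poison the next retraining run
        if score >= ALERT_THRESHOLD:
            # detection + reaction: alert the security team and hold for human review
            print(f"ALERT: suspicious record quarantined (score={score:.1f})")
            quarantined.append(record)
        else:
            accepted.append(record)
    return np.array(accepted), np.array(quarantined)

# Example: screen a batch against statistics computed from known-good training data.
rng = np.random.default_rng(0)
good = rng.normal(0, 1, size=(200, 4))
batch = np.vstack([
    rng.normal(0, 0.5, size=(5, 4)),   # routine records, well within the norm
    [[4.0, 0.0, 0.0, 0.0]],            # suspicious record: alert and quarantine
    [[9.0, 9.0, 9.0, 9.0]],            # extreme record: silently rejected
])
accepted, quarantined = screen_batch(batch, good.mean(axis=0), good.std(axis=0))
```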

Most agencies are still in the initial stages of incorporating AI/ML models into their operations. However, educating agency IT teams on these evolving threats, utilizing existing toolsets, and planning and preparing for these attacks should start now. The amount of data being collected and synthesized is massive and will continue to grow exponentially. We must leverage all the tools in the AI tool chest to make sense of this data for the good.

About the Author


Seth Cutler is the chief information security officer at NetApp.