
Why Adversarial Machine Learning Is the Next Big Threat to National Security


The Joint Artificial Intelligence Center (JAIC), a division of the United States Department of Defense (DoD) tasked with accelerating the adoption of artificial intelligence (AI) across the branches of the military, has stated that AI will eventually impact every mission carried out by the DoD.
AI is set to influence all military missions as well as every industry.
In particular, adversarial machine learning (AML), an emerging AI practice that involves independent and state-sponsored actors manipulating machine learning algorithms to cause model malfunctions, could have catastrophic consequences.
The Expansive Role of Artificial Intelligence
“For the Department of Defense, AI will impact all our missions, and to a larger extent, change the character of the future battlefield,” the JAIC said in an official blog post in May. That same month, the JAIC awarded an $800 million contract to Booz Allen Hamilton to support the JAIC’s joint warfighting mission initiative, the goal of which is to deliver advanced AI technology directly to the front lines.
The DoD itself, whose unclassified investments in AI have increased from just over $600 million in FY2016 to $2.5 billion in FY2021, has also emphasized the considerable influence that AI will have on the country’s national security initiatives going forward.
“AI is poised to transform every industry, and is expected to impact every corner of the Department, spanning operations, training, sustainment, force protection, recruiting, healthcare, and many others,” the Department said in its 2018 Department of Defense Artificial Intelligence Strategy.
It continued: “Other nations, particularly China and Russia, are making significant investments in AI for military purposes, including in applications that raise questions regarding international norms and human rights. These investments threaten to erode our technological and operational advantages and destabilize the free and open international order. The United States, together with its allies and partners, must adopt AI to maintain its strategic position, prevail on future battlefields, and safeguard this order.”
The DoD’s applications for AI technology are diverse in both purpose and scope, ranging from intelligence, surveillance, and reconnaissance initiatives to command and control operations, cyberspace operations, logistics, autonomous and semiautonomous vehicles, and lethal autonomous weapon systems (LAWS).
Most of these applications involve machine learning, a subset of AI concerned with teaching computer systems how to autonomously acquire knowledge and improve from experience and, in turn, make accurate predictions or determinations about phenomena in their environment.
What Is Artificial Intelligence?
Artificial intelligence is a discipline of computer science concerned with creating advanced algorithms and statistical models that mimic the reasoning, problem-solving, and predictive abilities of humans. It may also refer to the actual computers or machinery utilizing these processes.
AI seeks to explore not only how human learning, problem-solving, and estimation can be automated to make accurate predictions and decisions about phenomena, but also how computers can improve and accelerate tasks that would be difficult, impractical, or incredibly time-consuming for humans to accomplish through the use of conventional programming methodology.
The concept of AI dates back to a 1956 conference at Dartmouth College, entitled the “Dartmouth Summer Research Project on Artificial Intelligence.” It was at this conference that the term “artificial intelligence” was coined. The field experienced dry spells in both interest and funding, the so-called “AI winters,” from the 1970s until about the late 1990s, when AI as we know it today was revived.
What Is Machine Learning?
Machine learning (ML) is a subset of AI concerned with teaching computer systems how to autonomously reason, problem-solve, and make predictions and determinations. This is mainly accomplished by:

Feeding labeled training data to machine learning algorithms, a process known as supervised learning
Feeding unlabeled training data to machine learning algorithms, a process known as unsupervised learning
Providing machine learning algorithms with trial-and-error feedback based on how they interact with their environment, a process known as reinforcement learning
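To ground the first of these approaches, here is a minimal supervised-learning sketch in Python using scikit-learn; the toy dataset and labels are hypothetical, purely for illustration.

```python
# Minimal supervised-learning sketch (scikit-learn); the toy data is hypothetical.
from sklearn.linear_model import LogisticRegression

# Labeled training data: each input vector is paired with a known answer.
X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y_train = [0, 1, 0, 1]

# The algorithm learns a mapping from inputs to labels during training...
model = LogisticRegression().fit(X_train, y_train)

# ...and can then predict labels for inputs it has never seen before.
print(model.predict([[0.85, 0.75]]))  # expected output: [1]
```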

ML and AI should not be conflated. ML is just a method used to establish an artificially intelligent computer system. There are forms of AI that do not utilize ML, but ML is the most common method employed to achieve AI. There’s also deep learning (DL), a subset of ML that makes use of neural networks.
An example of ML, specifically supervised ML, is the DoD’s use of a computer vision algorithm to identify people and objects of interest in surveillance footage. The computer is fed a labeled or tagged dataset, e.g., images of high-profile individuals or armored security vehicles that contain notations of certain physical characteristics. The goal is for the computer’s machine learning algorithm to learn these patterns, establish profiles, and be able to accurately identify these individuals or objects once it begins sifting through the footage.
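As a rough sketch of how such a pipeline might look in code, the snippet below classifies a single frame with a pretrained convolutional network; the model choice, weights, and file path are assumptions for illustration, not the DoD’s actual system.

```python
# Hedged sketch: classifying one surveillance frame with a pretrained CNN.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard preprocessing for ImageNet-trained models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()  # inference mode: classifying, not training

frame = Image.open("surveillance_frame.jpg").convert("RGB")  # placeholder path
with torch.no_grad():
    logits = model(preprocess(frame).unsqueeze(0))
print(logits.argmax(dim=1).item())  # index of the most likely class
```

In a real deployment, the network would be fine-tuned on the labeled dataset described above rather than used off the shelf.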
What Is the Role of AI and ML in National Security?
The DoD is becoming increasingly concerned with how it can utilize AI and machine learning to improve and streamline military operations and other national security initiatives, as evidenced by its establishment of the JAIC and the billions of dollars it’s investing in AI system development.
Regarding intelligence collection, AI technologies have already been incorporated into military operations in both Iraq and Syria, where, as previously mentioned, computer vision algorithms are being used to detect people and objects of interest.
Military logistics is another area of focus for the DoD. The Air Force is using AI to keep track of when its planes are in need of maintenance, and the Army is using IBM’s AI software “Watson” for both predictive maintenance and analysis of shipping requests.
Defense applications of AI also extend to semiautonomous and autonomous vehicles, including fighter jets, drones or unmanned aerial vehicles (UAVs), ground vehicles, and ships owned and operated by the Navy.
For example, the Army is fielding robotic combat vehicles (RCVs) that utilize AI to remove improvised explosive devices (IEDs) and perform navigational and surveillance-related activities, while the Air Force is experimenting with pairing manned aircraft with unmanned aircraft, the latter of which would, in real time, automatically complete tasks like signal jamming for fighter pilots mid-flight.
The Navy, on the other hand, is utilizing AI on its unmanned, anti-submarine ship Sea Hunter, which not only autonomously navigates the open waters but actively coordinates missions with other unmanned sea vessels. The Navy is also using AI to power underwater swarm drones, which are designed to overwhelm enemy defense systems and coordinate with each other to complete tasks, as well as to power autonomous boats that could, if necessary, defend American harbors from invaders and hunt enemy submarines.
Clearly, how the U.S. defends its borders is undergoing a radical technological change via the implementation of AI and ML technologies, one that enhances mission and battlefield operations, expedites various defense activities both domestically and abroad, and arguably, supports the public good.
But what happens when these novel machine learning algorithms become susceptible to manipulation by adversaries? What are the resulting consequences for national security?
There’s a term for this. It’s known as adversarial machine learning (AML).
It’s no secret that AI consistently gets a bad rap, particularly from those who say it will one day replace the need for human intellectual activity and even develop a mind of its own. This is, of course, a gross misconception at best and conspiratorial fearmongering at worst.
If there’s any aspect of AI that people should be afraid of, it’s AML. Not only is it perpetrated by actors who seek to compromise national security and jeopardize human life, but, unlike a science-fiction movie, it’s a real, observed, and impending threat whose consequences are only beginning to be realized and understood.
What Is Adversarial Machine Learning (AML)?
Adversarial machine learning (AML) is a technique used to dupe ML models into producing false or inaccurate outputs.
AML is accomplished through one of a few tactics: corrupting the data an ML model learns from during training, which cybersecurity researchers call “poisoning”; feeding a deployed model deliberately manipulated inputs at inference time, known as “evasion”; or making physical, real-world alterations to objects that an AI system is expected to detect and respond to once deployed.
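To make the first tactic concrete, here is a toy Python sketch of label-flipping poisoning; the function, parameters, and data are hypothetical, not drawn from any real attack tool.

```python
# Toy "poisoning" sketch: silently flip a fraction of training labels.
import random

def poison_labels(labels, flip_fraction=0.1, num_classes=2, seed=0):
    """Corrupt the training signal by flipping some labels to wrong classes."""
    rng = random.Random(seed)
    poisoned = list(labels)
    n_flip = int(flip_fraction * len(poisoned))
    for i in rng.sample(range(len(poisoned)), n_flip):
        # Replace the true label with a different, incorrect class.
        poisoned[i] = (poisoned[i] + rng.randrange(1, num_classes)) % num_classes
    return poisoned

print(poison_labels([0, 1, 0, 1, 0, 1, 0, 1, 0, 1], flip_fraction=0.2))
```

A model trained on the corrupted labels can quietly learn the attacker’s errors, which is part of what makes poisoning hard to spot after the fact.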
These tactics, as one can imagine, can have serious consequences for both national security and human life.
How Does AML Threaten National Security and Human Life?
In a 2018 report by the Office of the Director of National Intelligence (DNI), several AML scenarios based on available research were presented. Perhaps the most pressing of these scenarios is AML’s potential to compromise computer vision algorithms, which are among the most widely used AI defense applications.
For example, researchers have demonstrated that strategically placing stickers on a stop sign can cause a vehicle’s object detection system to consistently misidentify it as a speed limit sign. So, in theory, if a vehicle is trained to automatically respond to phenomena in its environment, this could cause it to plow through the stop sign, putting its driver, passengers, and any pedestrians or bystanders in harm’s way.
One’s imagination can run wild with the dangers such misidentification could pose. What if a lethal autonomous weapons system (LAWS) misidentifies friendly combat vehicles as enemy combat vehicles? What if an explosive device, an enemy fighter jet, or a group of turrets is misidentified as a rock, a bird, or a satellite dish? The frightening scenarios are seemingly endless, and, unfortunately or fortunately, depending on your point of view, they’re supported by the literature.
MIT researchers tricked an image classifier into identifying a photo of machine guns as a helicopter. Should a weapons system equipped with computer vision ever be trained to neutralize detected machine guns, this misidentification could produce unwanted passivity, creating a potentially life-threatening vulnerability in the system’s machine learning algorithm. The reverse scenario, in which the system misidentifies a helicopter as machine guns, is equally alarming.
Evasion via digital perturbation was also demonstrated in a widely cited study, in which researchers added small changes, called “perturbations,” to an image of a panda. Though imperceptible to humans, these changes caused the ML algorithm to identify the panda, a giant bear of the Ursidae family, as a gibbon, a small ape of the Hylobatidae family.
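The panda result comes from the fast gradient sign method (FGSM) of Goodfellow et al.; below is a minimal PyTorch sketch of the core idea, assuming model is a standard classifier and x and y are an image batch with its true labels.

```python
# FGSM sketch: nudge each pixel in the direction that most increases the loss.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.007):
    """Return x plus a small perturbation that pushes the model off its prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The epsilon-bounded step is imperceptible to humans but can flip the
    # model's output, e.g., from "panda" to "gibbon."
    return (x + epsilon * x.grad.sign()).detach()
```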
Another study found that attackers can actually dupe facial recognition cameras with infrared light, which allows them to not only circumvent accurate recognition but also impersonate other people.
Evasion attacks include email spam filter manipulation. For example, if attackers know that an AI spam filter tracks certain words, phrases, and word counts for exclusion, they can manipulate the algorithm by using acceptable words, phrases, and word counts, and thus gain access to a recipient’s inbox, further increasing the likelihood of email-based cyberattacks.
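A toy sketch of that principle follows, assuming a naive keyword blocklist; real filters are statistical, but the probe-and-avoid dynamic is the same.

```python
# Hypothetical keyword-based spam filter and a trivial evasion of it.
BLOCKED_WORDS = {"winner", "free", "prize"}

def is_spam(message: str) -> bool:
    # Flag a message if any word appears on the blocklist.
    return any(word in BLOCKED_WORDS for word in message.lower().split())

print(is_spam("You are a winner! Claim your free prize"))   # True: caught
print(is_spam("You are a w1nner! Claim your fr3e pr1ze"))   # False: evaded
```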
The list of dangerous outcomes resulting from effective execution of AML goes on. As with all cybersecurity threats, the most important ingredients in the recipe for secure machine learning algorithms are training and prevention.
Solutions
Thankfully, cybersecurity experts are already devising a variety of new and interesting ways to protect against AML.
One such method is adversarial training, which involves feeding a machine learning algorithm potential perturbations during training. In the case of computer vision algorithms, this would mean training on images of the stop sign with those strategically placed stickers, or of pandas with those slight alterations. That way, the algorithm can still correctly identify phenomena in its environment despite an attacker’s manipulations. This method has its downsides, of course, because not all AML attacks can be anticipated.
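A minimal sketch of one adversarial-training step in PyTorch appears below, assuming a standard classifier and optimizer; it reuses the FGSM idea sketched earlier, and real training pipelines add many refinements.

```python
# Adversarial-training step sketch: train on clean and perturbed inputs together.
import torch
import torch.nn.functional as F

def adversarial_train_step(model, optimizer, x, y, epsilon=0.03):
    # Craft adversarial versions of this batch (FGSM).
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Learn to classify both the clean and the manipulated examples correctly.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```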
Other methods include pre-processing and denoising, which involve automatically removing any adversarial noise from inputs, as well as adversarial example detection, which employs nonintrusive image quality features to distinguish between legitimate and adversarial inputs. These techniques can ensure that AML inputs and alterations are neutralized before they ever reach the algorithm for classification.
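As a rough illustration of the pre-processing idea, the sketch below applies a median filter to each input before classification; the filter choice is an assumption, one simple option among many.

```python
# Input-denoising defense sketch: smooth images before they reach the model.
from PIL import Image, ImageFilter

def denoise(image: Image.Image) -> Image.Image:
    # Median filtering suppresses small, high-frequency perturbations of the
    # kind adversarial attacks rely on, at some cost to fine detail.
    return image.filter(ImageFilter.MedianFilter(size=3))
```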
Given that ML and AML are still relatively young fields, the research on both is still emerging. Expect new, state-of-the-art countermeasures in the coming years.
Staying Informed
The billions of dollars invested in AI, the reality that other nations are using AI technologies for purposes that the DoD considers nefarious, and AI’s ability to automate and expedite traditionally mundane, time-consuming, and labor-intensive tasks all indicate that the Pentagon’s interest in this exciting technology isn’t expected to wane anytime soon.
Unfortunately, this also means that threats to AI-enabled national security efforts will inevitably grow in both numbers and scale. Thankfully, the U.S. and its allies seem to be leading the charge on adequately studying and responding to such threats.
Going forward, U.S. businesses and industries can stay abreast of the DoD’s latest AI and machine learning developments by following the JAIC or by subscribing to the Thomas Insights Update.
 



