US Needs to Defend Its Artificial Intelligence Better, Says Pentagon No. 2 – Defense One

As the Pentagon rapidly builds and adopts new artificial intelligence solutions, Deputy Defense Secretary Kathleen Hicks said military leaders are increasingly worried about a second-order problem: AI safety.
AI safety broadly refers to making sure that artificial intelligence programs don’t wind up causing significant problems, whether because they were based on corrupted or incomplete data, were poorly designed, or were hacked by malicious attackers. 
As businesses have rushed to build, sell, and adopt machine learning solutions, AI safety has often been treated as an afterthought. But the Department of Defense is obligated to pay more attention to the issue, Hicks said Monday in a virtual appearance at the Defense One Technology Summit.
“As you look at testing evaluation and validation and verification approaches, these are areas where we know—whether you’re in the commercial sector, the government sector, and certainly if you look abroad—there is not a lot happening in terms of safety,” she said. “Here I think the department can be a leader. We’ve been a leader on the [adoption of AI ethical] principles, and I think we can continue to lead on AI by demonstrating that we have an approach that’s worked for us.”
While multiple private companies have adopted AI ethics principles, the principles adopted by the Defense Department in 2020 were considerably more strict and detailed. 
While AI safety has yet to make big headlines, the wide implementation of new machine learning programs and processes presents a rich attack surface for adversaries, according to Neil Serebryany, founder and CEO of AI safety company CalypsoAI. The company scans academic research papers, the dark web, and other sources to find new potential threats to deployed AI programs. It counts the Air Force and the Department of Homeland Security among its clients.
“Over the last five years, we’ve seen a more than 5,000 percent rise in the number of new attacks discovered and new ways to break systems,” said Serebryany. Many of those attacks focus on the big data sources that feed AI algorithms. It’s “very hard for a data practitioner to know if they have been breached or have not been breached.”
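To make that risk concrete, here is a minimal, hypothetical sketch of one such attack class, label-flipping data poisoning, in which an attacker silently corrupts a small slice of the training labels. The scikit-learn toy model and every name below are illustrative assumptions, not anything described by CalypsoAI or the Defense Department.

```python
# Minimal sketch (illustrative assumption, not from the article):
# label-flipping data poisoning against a toy scikit-learn classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy dataset standing in for the "big data sources that feed AI algorithms".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip the binary labels of a random fraction of training rows."""
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    labels[idx] = 1 - labels[idx]  # flip 0 <-> 1
    return labels

for fraction in (0.0, 0.05, 0.20):
    y_poisoned = poison_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = model.score(X_test, y_test)
    print(f"{fraction:4.0%} of labels flipped -> test accuracy {acc:.3f}")
```

Running the sketch illustrates how test accuracy can degrade as the poisoned fraction grows, while nothing in the training pipeline itself signals that anything is wrong, which is Serebryany's point about breaches being hard for data practitioners to detect.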
A report out this month from Georgetown’s Center for Security and Emerging Technology notes: “Right now, it is hard to verify that the well of machine learning is free from malicious interference. In fact, there are good reasons to be worried. Attackers can poison the well’s three main resources—machine learning tools, pretrained machine learning models, and datasets for training—in ways that are extremely difficult to detect.”
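The report’s point about pretrained models suggests one simple, widely used defense that is worth sketching: verifying a downloaded artifact against a checksum published through a separate trusted channel. The snippet below uses only Python’s standard hashlib; the file name and expected digest are hypothetical placeholders, not anything prescribed by the CSET report.

```python
# Minimal sketch (assumption, not from the CSET report): verify a downloaded
# pretrained-model file against a publisher-supplied SHA-256 digest so that
# tampering in transit or at a mirror is detectable.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0123abcd..."  # hypothetical digest from the publisher

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large model weights fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

model_path = Path("resnet50_pretrained.bin")  # hypothetical artifact name
if sha256_of(model_path) != EXPECTED_SHA256:
    raise RuntimeError(f"checksum mismatch: {model_path} may have been tampered with")
```

A checksum only detects after-the-fact tampering, of course; it does nothing against a model that was poisoned before the publisher hashed it, which is why the report treats detection as the hard problem.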
The department is grappling with AI safety at a time when it is rushing to adopt AI in new ways. Within the next three months, the military will dispatch several teams across its combatant commands to determine how to integrate their data with data across the department, speed up AI deployment, and examine “how to bring AI and data to the tactical edge” for U.S. troops, Hicks said.
“I think we have to have a cultural change where we’re thinking about safety across all of our components. We’re putting in place [verification and validation and testing and experimentation] approaches that can really ensure that we’re getting the safest capabilities forward,” she said. 
The Defense Department, she said, would look beyond just educating the technical workforce on safety issues and would also reach out to “everyone throughout the department.”

Source: https://www.defenseone.com/technology/2021/06/us-needs-defend-its-artificial-intelligence-better-says-pentagon-no-2/174876/