Editor’s Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (part a. and b.) and third question (part c.) which ask how AI will affect the character and/or the nature of war, and what acquisition and application processes need to change to allow the government to address the AI national security and defense needs of the United States.
Recently, on a military base in the Euphrates River Valley, a soldier described to me how he was supposed to defend the base from a small hostile drone, like the ones folks back home use to take dramatic shots for real estate listings or weddings. On his plywood desk sat an array of laptops and devices, but none of them was connected to or worked with the others. Occasionally one of the devices chimed an alert. These alerts were usually false positives, but he still needed to manually check several of the other systems, one after another. It wasn’t clear where he should direct his attention or how he should interpret all the information he was receiving.
As the soldier spoke, I recalled having similar feelings when interfacing with military technology to target the Islamic State as a marine in 2016: the innumerable hours watching video footage from aerial platforms to see momentary glimpses of nefarious activity, the sifting of endless intelligence reports to characterize the military use of a target, the decisions made with incomplete information.
The problems the soldier and I encountered were not caused by a lack of technology or information derived from it. Rather, military technology products often create uncertainty and inefficiency because these products are not working together. So, what can this soldier’s experience teach us about the need for system-to-system integration? What could technology integration look like in the context of defending against small drones, and can that model be applied to other defense platforms? This concept is not new. The U.S. military has successfully implemented methods to integrate technology before, like the device attached to the rifle slung around the soldier’s shoulder: the Picatinny rail. To reduce defense technology’s disjointed complexity, platforms with artificial intelligence (AI) should serve as the means to foster integrated systems of capabilities that provide warfighters with clarity and effectiveness.
Seams and gaps in counter-drone technology permeate the “detect-track-defeat” workflow, increasing the time needed to identify a threat and the chances it is missed entirely. The two primary methods of detecting small drones are radar and radio frequency, and there are multiple products associated with each method. A user looks at one system for radio-frequency classification and another to see if a radar detected anything. Meanwhile, someone else checks a third system to determine what friendly aircraft are nearby. The information that one system displays is hard to compare to the others and does not create a coherent picture of the operating environment. Moreover, the options to defeat drone threats are not arrayed in a way that lets a user act decisively when confronted with a decision. Modular capabilities that AI integrates into a cohesive system would reduce this uncertainty.
How might AI aid in defending a military base from small drones? It might look something like this: Ideally, a long-range radar scans several kilometers around the base to detect a drone early. An effective radio-frequency detection system also decodes the signal between the drone and its ground-control station to sense and locate the drone. These early detection systems cue another sensor suite with a shorter-range radar and infrared camera to slew to the identified location. A computer runs AI processes — such as statistical filters, computer vision models, and other deep learning methods — to parse and fuse the data from the two radars, radio-frequency sensor, and infrared camera into one drone track, which a single user interface displays in the combat operations center.
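The fusion step described above can be sketched in miniature. The following is a simplified, hypothetical illustration, not any fielded system: detections from the two radars, the radio-frequency sensor, and the infrared camera arrive in a common coordinate frame, and a greedy nearest-neighbor association merges detections that fall close together into a single track. (Real trackers use far more sophisticated statistical filters; the sensor names and the 200-meter association gate here are illustrative assumptions.)

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g. "long_range_radar", "rf_sensor", "ir_camera"
    x: float           # meters east of the base
    y: float           # meters north of the base
    confidence: float  # 0..1

def fuse_detections(detections, gate_m=200.0):
    """Greedy nearest-neighbor association: any detection within gate_m
    meters of an existing track joins it, and the fused position is a
    confidence-weighted average of the contributing sensors."""
    tracks = []
    for det in detections:
        for track in tracks:
            dist = ((track["x"] - det.x) ** 2 + (track["y"] - det.y) ** 2) ** 0.5
            if dist < gate_m:
                w = track["weight"] + det.confidence
                track["x"] = (track["x"] * track["weight"] + det.x * det.confidence) / w
                track["y"] = (track["y"] * track["weight"] + det.y * det.confidence) / w
                track["weight"] = w
                track["sensors"].add(det.sensor)
                break
        else:
            tracks.append({"x": det.x, "y": det.y,
                           "weight": det.confidence, "sensors": {det.sensor}})
    return tracks
```

In this toy version, a radar blip and a radio-frequency hit near the same point collapse into one track for the single user interface, while a distant infrared detection remains a separate track.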
AI autonomously integrates the detection methods, reducing both the number of operators and their burden. When a user selects the track in the user interface, they see the video feed from the camera, which is locked onto the drone using computer vision. They also see the frequency the drone uses, determined by the radio-frequency sensor. With the fused track information presented in a cohesive and sensible way, the user can quickly identify the drone as friend or foe and decide whether to take defensive action.
A sensor fusion system with AI integration could also reduce complexity when the military fields new hardware for counter-drone operations. Think of it like attaching a new scope to a rifle using a Picatinny rail. If a vendor fields a new radar, the expectation should be that there is a standard backend platform, using AI processes, into which that new radar will integrate. People will still need to learn how to install and maintain the hardware, but how its input integrates into the existing platform should be invisible to the operator, with AI handling the integration. Rather than users trying to interpret the raw data from the new radar and compare that interpretation to legacy systems, AI would fold the radar’s input into the platform the user is already working with.
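The “software rail” above amounts to a contract: any new sensor must translate its raw output into the platform’s common detection format before the backend ever sees it. A minimal sketch of what such a contract could look like, with an entirely hypothetical vendor radar that reports range and bearing:

```python
import math
from abc import ABC, abstractmethod

class SensorAdapter(ABC):
    """The software 'rail': every new sensor, whatever its vendor,
    must emit detections in the platform's common format."""
    @abstractmethod
    def poll(self) -> list:
        """Return detections as dicts with keys
        'x' (meters east), 'y' (meters north), 'confidence' (0..1)."""

class NewVendorRadar(SensorAdapter):
    """Hypothetical new radar that reports range/bearing. The adapter,
    not the operator, translates raw vendor data into the coordinate
    frame the existing backend already understands."""
    def __init__(self, raw_reports):
        self.raw_reports = raw_reports

    def poll(self):
        return [{
            "x": r["range_m"] * math.sin(math.radians(r["bearing_deg"])),
            "y": r["range_m"] * math.cos(math.radians(r["bearing_deg"])),
            # Crude illustrative mapping of signal-to-noise ratio to 0..1.
            "confidence": r["snr_db"] / (r["snr_db"] + 10.0),
        } for r in self.raw_reports]
```

Swapping in a different vendor’s radar then means writing a new adapter, not retraining every operator on a new screen.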
Fostering a competitive market for these modular technologies and defining the backend AI systems to integrate them will require changing the model and culture of defense acquisition. U.S. Defense Department procurement is complicated, to state the obvious, but the acquisition incentive structure is largely responsible for the disjointed nature of defense technology. For example, “cost-plus” contracting rewards companies for hoarding every aspect of their product. Take last year’s tense hearing in the House Armed Services Committee over the logistics software for F-35 aircraft sustainment. Lockheed Martin claims the software as its intellectual property, retaining the right to withhold it from the government and ensuring sustained cash flow by cornering the F-35 maintenance market for the life of the platform.
However, the customer (i.e., the U.S. government) should blunt this reflex to hoard contract value by paying defense companies for products that integrate with other systems, while still respecting the incentives that intellectual property rights create. The United States will lose its edge in defense technology if the default procurement model continues to be inflexible, fixed-price or cost-plus contracts awarded to the same handful of defense primes for expensive, bespoke technology products that are obsolete by the time they are fielded. Platforms with AI integration processes provide a viable alternative to the isolated development of defense technology and the monopolization of its sustainment.
Successful AI integration cannot happen unless U.S. defense acquisitions and contracting personnel understand the value AI processes offer and adapt their organizational culture to the modern pace of development. These professionals should then use more flexible contracting vehicles, such as Part 12 of the Federal Acquisition Regulation, which governs commercial-item acquisition, to seek, evaluate, and incorporate the most effective technology for each defense capability.
Returning to the counter-drone scenario, consider a vendor with an innovative idea for a new radio-frequency detection system. The Defense Department should partner with that vendor in the research and prototype phases. If the system performs well through a standard evaluation pipeline, it matures into a production contract. Along with evaluation, integration ought to be the other focus of the early contracting phases.
In this example, the new radio-frequency detection system will exist within a counter-drone technology ecosystem and should be brought into the fold as it matures. The vendor retains the source code and patents as its intellectual property, but there should be a defined AI-enabled integration platform for a given defense capability. The vendor gives this platform access to the data gathered by its system through application programming interfaces and interface control documents. Thus, the input collected by the vendor’s hardware integrates into a seamless capability that supports, but does not replace, the warfighter.
Using AI to create integrated systems is meant to sort tasks between computers and people based on the strengths of each. Computers and AI are good at repetitive data processing from many sources, then statistically filtering those disparate data inputs to recognize patterns and determine if those inputs are, in fact, the same thing. A computer is better than a person at processing raw radar blips or monitoring a system that detects radio-frequency signals on frequencies commonly used by small drones. Properly trained computer-vision models can distinguish infrared camera images of a bird from those of a drone flying over a kilometer away from the sensor. People wearing night-vision goggles struggle to distinguish the flashing lights of a commercial airliner flying at 30,000 feet from those of a quadcopter drone flying 150 meters above the ground. Trust me, I’ve tried.
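The kind of repetitive per-scan processing computers excel at can be illustrated with one of the simplest statistical filters, an alpha-beta tracker. This is a generic textbook technique, not a description of any fielded counter-drone system, and the gain values are illustrative assumptions: each radar scan, the filter predicts where the target should be, then corrects the estimate with the new measurement.

```python
def alpha_beta_filter(ranges_m, dt_s=1.0, alpha=0.5, beta=0.1):
    """One-dimensional alpha-beta filter over a sequence of radar range
    measurements: predict the target forward one scan, then correct the
    position (gain alpha) and velocity (gain beta) estimates with the
    residual between prediction and measurement."""
    x = ranges_m[0]  # initial position estimate, meters
    v = 0.0          # initial velocity estimate, meters/second
    estimates = []
    for z in ranges_m[1:]:
        x_pred = x + v * dt_s        # predict forward one scan
        residual = z - x_pred        # how wrong the prediction was
        x = x_pred + alpha * residual
        v = v + (beta / dt_s) * residual
        estimates.append((x, v))
    return estimates
```

A person cannot do this arithmetic for every blip on every scan; a computer does it thousands of times per second, leaving the human to judge what the resulting track means.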
Humans, in contrast, are better at using the fused information to decide what action to take. This concept, of AI fusing raw inputs into a seamless presentation from which a person makes a decision, parallels the Picatinny rail.
The need for the rail arose after the Gulf War, in the early 1990s, as night optics, aiming lasers, and other accessories became small and cheap enough to mount them on an individual soldier’s weapons. Unfortunately, each device attached to a weapon using different hardware. According to Picatinny Arsenal engineer Gary Houtsma, “Most companies were using a rail-grabber of some sort, but they were tight on some rails and loose on others. No one ever had standardized dimensions.” Houtsma and his team created the Picatinny rail, which was fielded in 1995 and became the standard hardware mount for the U.S. military and NATO allies. The rail enabled a weapon to have enhanced capabilities while simplifying the means by which users integrated those capabilities.
AI should serve as the software version of the Picatinny rail: providing more utility and effectiveness without more complexity. The standardized rail allowed both a night optic and an infrared aiming laser to be easily attached and zeroed to a rifle. When a soldier aims a rifle, he sees the input from the laser aligned in the night optic, so when he decides to pull the trigger, the bullet will go where he is aiming. AI should serve the same purpose for other defense capabilities, integrating data into a sensible display of information for a user such that they can take appropriate and effective action.
Joint service leadership throughout the U.S. Department of Defense agrees that the United States is competing with Russia and China for global influence. The U.S. military needs to balance this competition with support for ongoing conflicts against nonstate actors who have adopted inexpensive means of lethality, including small, cheap drones. The U.S. military will be better equipped to deter and respond to these forms of aggression if it adopts integrated technologies that increase a servicemember’s awareness of a physical environment and allow him to act on that awareness. Congress and the U.S. Defense Department ought to direct investment to companies that leverage AI processes to create ecosystems of technologies. These integrated systems will clarify soldiers’ decisions and improve their effectiveness, just as the Picatinny rail brought together hardware to enhance a soldier’s ability to shoot a rifle.
Hans Vreeland is a former marine artillery officer and now leads technical operations at Anduril Industries. His opinions are his own.