What if Machine Learning Is Less Than It Seems?

Artificial intelligence (AI), particularly its most prominent subfield, machine learning, has been hailed as the next game-changing military technology in the lineage of gunpowder, the combustion engine, and the airplane. One defining characteristic of AI innovation amid today's geopolitical competition between the United States and China is that it has occurred primarily in the private sector. Whereas Chinese officials pursue a "military-civil fusion" policy to centralize the Chinese Communist Party's leverage over private sector AI research, American analysts increasingly call for a "whole-of-nation" approach to basic technological research. AI is a strategically important technology, but what if the long-term utility of machine learning in particular is more limited than it seems?

The question might seem odd, since private sector AI researchers wear their sense of momentum on their sleeves. DeepMind researcher Nando de Freitas, for example, recently proclaimed that "the game is over" and that the keys to advancing AI are here for the taking. Breakthroughs in AI systems' capacity to match or exceed human abilities have driven up interest in machine learning. Examples include AlphaGo's victories over professional Go player Lee Sedol, GPT-3's ability to write articles and code, Gato's ability to perform multiple specialized tasks rather than just one, and AlphaFold's recent prediction of structures for nearly all known proteins. In national security, seemingly not a day goes by without news of Department of Defense (DOD) efforts to procure emerging technologies, including semi-autonomous, unmanned systems.

But even in the hype-heavy field of AI, some prominent figures have begun to express doubt about machine learning's role in AI's future development. Machine learning, particularly deep learning, has been relatively successful only in carefully tailored, narrow domains, and prominent AI researcher Yann LeCun recently released a position paper detailing the limitations and insufficiencies of current machine learning techniques. AI expert Gary Marcus has likewise been outspoken about AI systems' chronic problems with reliability and novelty, warning that the cost of not changing course "is likely to be a winter of deflated expectations."

Amid the flood of AI-related developments in U.S. military technology, some researchers have begun to link today's high technological expectations to age-old, lofty visions of command and control. As Ian Reynolds writes, "dreams of technologically enabled sensemaking and fast decisions die hard." Similarly, U.S. Navy Commander Edgar Jatho and Naval Postgraduate School researcher Joshua A. Kroll highlight how little attention national security analysts pay to the current frailties and shortcomings of AI systems and to what those flaws mean for future AI-enabled military technology.

It is not inconceivable that disappointing developments in AI systems in the coming years will push the field into a new "AI Winter," a recurring stage in a historical cycle in which inflated expectations give way to disillusionment with the technology's potential, followed by a drop in private funding. In this scenario, the United States' efforts to keep its innovative edge will suffer unless it has taken steps to maintain a sufficient degree of AI research and development independent of the private sector, grounded in an acknowledgment of the technology's current shortcomings.

One might object that an AI Winter is not guaranteed to occur in the first place. True, but preparing for one allows the U.S. military to use every bit of leverage it has over the United States' innovation ecosystem to remain the world's preeminent technological power; it does not hurt to prepare for the worst. Ideally, the DOD and other branches of the U.S. government would use their relationships with private sector AI firms to prevent an AI Winter from occurring at all.

How should this be done?

First, the DOD should promote minority voices in AI, meaning researchers pursuing agendas outside of machine learning, by awarding them public-private partnerships and acquisition or procurement contracts. While it would be unwise for lesser-known AI techniques to become the DOD's primary focus in the near term, the department can walk and chew gum at the same time. The Defense Department has been called on before to send clearer signals "to stimulate private-sector investment" in critical technologies like AI. One such signal should be that, while machine learning has broad applicability across the armed forces and will remain relevant for defense purposes, private sector research and development that treats machine learning as just one piece of the AI puzzle will be rewarded. But how can plausible minority agendas in private sector AI research be distinguished from quieter voices merely exaggerating the potential of their designs?

This leads to the second recommendation: individuals of diverse intellectual and educational backgrounds should be consulted in developing metrics for the testing, evaluation, verification, and validation (TEVV) of AI systems. Refocusing the DOD on the long-term development of AI-enabled military technology requires an understanding of which minority voices in private sector AI research are presenting plausible (even if long-shot) pathways beyond machine learning, which ties this recommendation to the first. For all the talk of AI systems requiring the right quantity and quality of training data, AI systems and their outputs are themselves data that require interpretation. The content of GPT-3's published articles, AlphaGo's strategies when playing Go, and the like are all pieces of data that humans must interpret. Without the proper lens for viewing such data, an AI system or application can be mistaken for something more, or less, advanced and valuable than it actually is.
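To make the interpretive problem concrete, consider a minimal sketch, in Python with invented toy data, of how a single headline metric can mislead evaluators. It does not represent any DOD evaluation tool, and every name in it is hypothetical: a toy classifier scores well on data resembling its training set but degrades sharply under a modest distribution shift, exactly the kind of brittleness a sound TEVV regime must be designed to surface.

```python
# Hypothetical illustration only: a toy nearest-centroid classifier whose
# headline accuracy hides brittleness under covariate shift.
import numpy as np

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    # Two Gaussian classes in 2-D; `shift` moves the inputs away from
    # what the classifier saw during training (covariate shift).
    x0 = rng.normal(loc=[0.0 + shift, 0.0], scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=[3.0 + shift, 3.0], scale=1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

# "Training": compute one centroid per class from unshifted data.
x_train, y_train = sample(500)
centroids = np.array([x_train[y_train == c].mean(axis=0) for c in (0, 1)])

def accuracy(x, y):
    # Predict the class whose centroid is nearest, then score.
    dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == y).mean())

x_iid, y_iid = sample(500)              # drawn like the training data
x_ood, y_ood = sample(500, shift=2.5)   # modestly shifted inputs

print(f"in-distribution accuracy: {accuracy(x_iid, y_iid):.2f}")  # ~0.98
print(f"shifted-data accuracy:    {accuracy(x_ood, y_ood):.2f}")  # ~0.82
```

Read in isolation, the first number overstates the system's maturity; only the comparison across conditions reveals the brittleness. That gap between a metric and its meaning is what diverse TEVV expertise is meant to close.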

Thus, expertise from fields including cognitive psychology, neuroscience, and even anthropology, as well as researchers of interdisciplinary backgrounds—in addition to those trained primarily in AI research or development—would help military organizations stake out independent tracks for AI-enabled military technology that resist the often misaligned ends of private sector research.

Finally, and most obviously, the U.S. government should increase its investment in basic research "spanning the spectrum of technological disciplines," as Martijn Rasser and Megan Lamberth recommend. Many have called for such an increase in recent years, and such appropriations became more likely when the U.S. Senate and House of Representatives passed the CHIPS and Science Act in July of this year. This recommendation requires little elaboration other than to note that increases in scientific research funding ought to be accompanied by grounded ways of interpreting AI developments and by refined, long-term visions for AI's design.

Vincent J. Carchidi holds an M.A. in political science from Villanova University. He specializes in the intersection of technology and international affairs, and his work on these subjects has appeared in War on the Rocks, AI & Society, and the Human Rights Review.

Image: Flickr/U.S. Department of Defense.