
How The White House “Guidance For Regulation Of Artificial Intelligence” Invites Overregulation

Excessive top-down federal funding and governance of scientific and technological research will be increasingly incompatible with a future of lightly regulated science and technology specifically, and with limited government generally.

Neither political party takes that view, though. In a rule-of-experts, “send-the-tax-dollars-home” environment, America risks becoming vulnerable to industrial policy and market-socialist mechanisms as frontier technologies become more complex.

Addressing infrastructure and other broad initiatives a year ago in his February 5, 2019, State of the Union address, for example, President Donald Trump called for legislation “including investments in the cutting edge industries of the future” and proclaimed, “This is not an option, this is a necessity.”

Beyond such spending’s thick strings and the regulatory effects that propagate with it, it is not proper for the sciences, nor for their practical applications, to proceed walled off from one another in the arbitrary legislative appropriations and regulatory environments that prevail in Washington.

Artificial intelligence in particular serves as a case study, or warning. Emblematic was Executive Order 13859 of February 11, 2019, on “Maintaining American Leadership in Artificial Intelligence” and the establishment of the “AI Initiative,” which were followed by the March 19, 2019, launch of the federal hub AI.gov (now whitehouse.gov/ai).

Executive orders are not law, but they can influence policy, and this one promotes “sustained investment in AI R&D in collaboration with industry, academia,” and other undertakings.

E.O. 13859 also calls for federal collection of data, among other centrally coordinated moves: “Actions shall be implemented by agencies that conduct foundational AI R&D, develop and deploy applications of AI technologies, provide educational grants, and regulate and provide guidance for applications of AI technologies.”

Whew. This federalization is concerning on its own, but it occurs in an environment in which much AI research at the federal level happens under the auspices of the Department of Defense.

Bet you didn’t know that the Pentagon, on the very day after Trump’s 2019 AI executive order, released its own AI “strategy,” subtitled “Harnessing AI to Advance Our Security and Prosperity,” describing its uses, plans, and ethical standards for deployment. DoD has since promised new rules for how it develops and uses AI.

But where, indeed, is the lone spot where a definition of AI is codified in federal statute? In the John S. McCain National Defense Authorization Act for Fiscal Year 2019.

When it comes to robotics and the military, the concern is that Isaac Asimov’s famous Laws of Robotics (devised to forbid the harming of humans) are programmed out, not in. This is part of what makes the fusion of government and private AI deployment problematic. Where a tech titan’s one-time motto had been “Don’t Be Evil,” a fitting admonition now for the technology sector as a whole is:

“Don’t Be Government.”

The most recent development is the White House Office of Management and Budget’s 2020 Guidance for Regulation of Artificial Intelligence Applications, directed at heads of federal executive branch agencies. Issued in fulfillment of Trump E.O. 13859 and building upon it, the January 2020 document at first blush strikes the right tone: engaging the public, exercising forbearance, limiting regulatory overreach, eliminating duplication and redundancy across agencies, improving access to government data and models, recognizing that a one-size regulatory shoe does not fit all, using performance-based objectives rather than rigid rules, and, in particular, avoiding over-precaution. For example, the guidance on p. 2 instructs:

“Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits.”

The OMB’s Request for Comments on the Guidance at one point seems to adopt the same reasoned laissez-faire stance: “OMB guidance on these matters seeks to support the U.S. approach to free-market capitalism, federalism, and good regulatory practices (GRPs).”

Michael Kratsios, Chief Technology Officer of the United States, called the Guidance the “first-of-its-kind set of regulatory principles to govern AI development in the private sector” to “address the challenging technical and ethical questions that AI can create.”  

But make no mistake: the new AI guidance constitutes a set of regulatory principles, especially as those principles will be interpreted by less market-oriented administrations that later assume the helm.

The Guidance states:

“When considering regulations or policies related to AI applications, agencies should continue to promote advancements in technology and innovation, while protecting American technology, economic and national security, privacy, civil liberties, and other American values, including the principles of freedom, human rights, the rule of law.”

The guidance mentions “American values” five times, without recognizing the degree to which the top-down, administrative-state form of governance that now prevails, as distinct from Article I lawmaking, is incompatible with those values.

Nor is there sufficient appreciation of the extent to which the regulatory bureaucracy can hold conflicting visions of “rule of law.” Today’s administrative state has its own set of value pursuits and its own visions of what counts as a cost, what counts as a benefit, and the sources of each. As such, the administration’s AI Guidance contains elements that creative agencies seeking to expand can exploit once the ostensibly less-regulatory Trump administration has left the scene.

The AI Guidance correctly states: “The deployment of AI holds the promise to improve safety, fairness, welfare, transparency, and other social goals, and America’s maintenance of its status as a global leader in AI development is vital to preserving our economic and national security.” 

On the other hand, the Guidance (p. 3) says “AI applications could pose risks to privacy, individual rights, autonomy, and civil liberties that must be carefully assessed and appropriately addressed.”

Well, that’s interesting. As post-9/11 and more recent surveillance history shows, governments, not the institution of orderly, competitive free enterprise, are the primary threat to these very values. Opening the door too far to agencies misidentifies the sources of “values” problems and lays the bedrock for counterproductive and harmful regulation.

Unfortunately, agencies wanting the legitimacy necessary to throw their weight around on the new and exciting AI playground have been needlessly handed an invitation by the Guidance.

For example, in evaluating “benefits and costs” of regulatory alternatives, agencies are to (p. 12) evaluate “impacts to equity, human dignity, fairness, potential distributive impacts, privacy and civil liberties, and personal freedom.”

These bureau-speak formulations and directives plainly favor agencies’ governmental proclivities more than they defer to the competitive process and to non-governmental resolutions of the difficult issues that will inevitably arise from the proliferation of AI.

Unless externally restrained, a regulatory bureaucracy’s inclination is to answer the question, “Is there call for regulation?” in the affirmative. The Guidance invites agencies (p. 11) to “consider whether a change in regulatory policy is needed due to the adoption of AI applications in an already regulated industry, or due to the development of substantially new industries facilitated by AI.”

Why would the Trump administration open this Pandora’s box? As a wholly blank canvas, this approach to AI policy will prove an irresistible unleashing of the bureaus. Trump’s regulatory-reduction task forces notwithstanding, there exists no permanent “Office of No” anchored at any agency to vigorously resist top-down discretion and reject the heavy Washington influence agencies are invited to proffer.

The unfortunate iron law that industry generally prefers regulation that advances its interests and walls out competition will prove true of AI regulation specifically: “Companies cannot just build new technology and let market forces decide how it will be used,” said one prominent CEO in January 2020.

Companies may dislike the kind of regulation that makes them ask “Mother, may I?” before they take a risky step. On the other hand, established players, especially given the head start of the government contracting and military presence in AI, will appreciate federal approaches that just so happen to forestall those nettlesome upstarts with a different idea, even when those new ideas advance safety or accountability.

Here are a few additional concerns with federal AI Guidance at this stage.

  • The very first item in the “Template for Agency Plans” (Appendix B, p. 14) writes agencies a blank check and indulges regulatory oversight fantasies. Agencies are instructed to produce “Statutory Authorities Directing or Authorizing Agency Regulation of AI Applications” and to “List and describe any statutes that direct or authorize your agency to issue regulations specifically on the development and use of AI applications.” But no statutory definition of AI even existed when any such supposed predicates came to be, and this request for statutory rationales for future intervention will be stretched to justify regulation. Indeed, the Guidance neither engages Congress nor recognizes its primacy in any respect; it does not even urge agencies to consult with Congress for clarity. Nor is there any nod toward common law remedies for potential hazards, such as what have been called “dangerous animal” and “peeping tom” approaches. If agencies are to have this power, Congress, not the president, needs to grant it.
  • The Guidance invokes prior executive orders and OMB regulatory guidance, and pursuits like maximizing net benefits and the preparation of so-called “regulatory impact analyses,” as the restraints on excessive AI regulation. But these tools have heretofore failed to restrain regulation or to facilitate streamlining, much less anything like a hands-off default. Quite the contrary, they are apt to be used to reinforce rather than refute the need for regulation.
  • The Guidance (p. 12) is favorable toward, and invites expansion of, antitrust regulation: “Agencies should also consider that an AI application could be deployed in a manner that yields anticompetitive effects that favors incumbents at the expense of new market entrants, competitors, or up-stream or down-stream business partners.”  
  • The Guidance invites social policy regulatory experimentation: “AI applications have the potential of reducing present-day discrimination caused by human subjectivity,” the Guidance notes (p. 5). On the other hand, it invites (p. 5) political predation in the form of social engineering experimentation: “When considering regulations or non-regulatory approaches related to AI applications, agencies should consider … issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue.” And further (p. 12), “there is a risk that AI’s pursuit of its defined goals may diverge from the underlying or original human intent and cause unintended consequences—including those that negatively impact privacy, civil rights, civil liberties, confidentiality, security, and safety.” Biased AI will leave dollars on the table that superior AI will collect, which can be a better outcome than premature regulatory ventures that derail this evolution.
  • The OMB directive may create vulnerability to the very guidance documents that the administration is seeking to restrain elsewhere via executive order. In the aforementioned call for a (premature, grabby) inventory of sector-specific statutory authority, agencies are invited to use their conclusions regarding their authority “to issue non-regulatory policy statements, guidance, or testing and deployment frameworks.”
  • Relatedly, there may be inadvertent invitations (p. 7) to gaming and rent-seeking in well-meaning attempts to “allow pilot programs that provide safe harbors” and the systematization of “collaboration with industry, such as development of playbooks and voluntary incentive frameworks.” The White House further encourages (p. 9) “Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies,” asserting that “Federal engagement with the private sector on the development of voluntary consensus standards will help agencies develop expertise in AI and identify practical standards for use in regulation.” Top-down conformity has its downsides, such as semi-cartelization, since those “voluntary consensus standards” will be favored by only some firms and entrepreneurs, not all.

Too frequently, there is misdiagnosis and denial regarding the root source of frontier technologies’ risks: government itself. The OMB guidance (p. 6) calls on agencies to “encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process.” But the government is more prone to undermine security-enhancing encryption used in private sector applications, for example. And, especially given the heavy government “collaborative” role sought, it is prone to indemnify “winner” companies when things go wrong, thereby mangling risk-management mechanisms like insurance and containment in AI “ecosystems.”

Since the administration’s AI proclamations belong in the regulatory rather than deregulatory camp, it is good that “strong” AI (the potentially sentient, self-improving version) is ostensibly exempted from the Guidance. Fortunately, the Guidance acknowledges (p. 11) that “current technical challenges in creating interpretable AI can make it difficult for agencies to ensure a level of transparency necessary for humans to understand the decision-making of AI applications.” Indeed, agencies cannot do this; no one can; it is the very nature of black-box machine learning. But it is a sure bet that agencies would seize this authority anyway, as some of the bullets above make apparent.

The AI guidance appears in a policy climate in which Republicans and Democrats alike seek major government funding of science generally, an environment replete with proposals that have marinated in regulatory, administrative-state frameworks, up to and including a “manufacturing czar,” and in quasi-military terminology such that energy security gets equated with national security. AI is vulnerable to all of this. Internationally, governments are moving toward regulation of AI; and the U.S., by these new actions, has demonstrated readiness to do so as well.

This state of affairs is not particularly the fault of well-meaning policymakers within the White House; it results from the fact that there exists no audience or constituency for keeping government’s hands out of complex, competitive free enterprise generally. The disruptions purportedly to be caused by AI create irresistible magnets for the opportunistic and cynical to pursue regulation.

Unfortunately, in part due to Trump’s order and the related and derivative guidance yet to come, we can predict that future administrations and legislators will expand government alliances with a subset of private sector winners, perhaps even a sort of cartelization. The legitimization of this concept at the top, by an ostensibly deregulation-oriented president, will make it harder for our descendants to achieve regulatory liberalization and maintain any “separation of technology and state” in future complex undertakings, many of which will be AI-driven.

In similar vein, and illustrative of the concerns raised here, the establishment of a “Space Force,” enacted in the National Defense Authorization Act of 2020, presents the same lock-in of top-down federal managerialism over private sector undertakings, given that commercial space activities have hardly taken root beyond NASA contractors and partners. Making the (AI-driven) force a sixth branch of the armed forces will inevitably alter freedoms and private commercial space activities, heavily influencing technology investment and evolution in a sector that barely exists yet. The Space Force move had already been preceded by a presidential directive on space traffic management, complete with tracking, cataloging, and data sharing with government. It is worth remembering that most of the debris in space used to justify calls for regulation is there thanks to the NASA legacy, not to private entrepreneurs, who would have needed to ponder property rights in sub-orbital and orbital space in a different way. Even though “normalizing” commercial space activities for a “diverse portfolio of actors and approaches” is not compatible with heavy regulation, the role of competitive discipline may yet be improperly overlooked or squelched.

So the AI Guidance is by no means making an appearance in a policy vacuum, which is not altogether encouraging. In similar vein, an October 2019 executive order established a new President’s Council of Advisors on Science and Technology to “strengthen … the ties that connect government, industry, and academia.” This project entails “collaborative partnerships across the American science and technology enterprise, which includes an unmatched constellation of public and private educational institutions, research laboratories, corporations, and foundations, [by which] the United States can usher extraordinary new technologies into homes, hospitals, and highways across the world.” Even this appeared in the wake of E.O. 13885 on “Establishing the National Quantum Initiative Advisory Committee,” aimed at implementing the 2018 National Quantum Initiative Act and its purpose of “supporting research, development, demonstration, and application of quantum information science and technology.”

While big science need not entail big government, the alignment of forces implies that it likely will. There is, however, no commandment to regulate frontier sectors via the same administrative-state model that has dominated policy in recent decades, and policymakers are at a fork in the road that will affect the evolution of business and enterprise. On matters of safety, economics, and jobs, the government need not steer while the market merely rows.

(This article is based on my comments to OMB on its Request for Comments on the Guidance for Regulation of Artificial Intelligence Applications.)