If Artificial Intelligence Can’t Be Defined, How Can It Be Regulated?

The Senate hosted its inaugural Artificial Intelligence (AI) Insight Forum this week, described by Senate Majority Leader Chuck Schumer (D-NY) as “a gathering unlike any seen before, to debate a topic unlike any other.” The forum—the first of a series to be hosted throughout fall 2023—brought together “some of America’s leading voices in AI” and built on earlier work of the Senate Committee on the Judiciary.

Sen. Schumer has hailed the bipartisan efforts of Senators working on the issue but makes no bones about the difficulty of the task ahead:

Legislating on AI is certainly not going to be easy. In fact, it will be one of the most difficult things we have ever undertaken, but we cannot behave like ostriches sticking our heads in the sand when it comes to AI.

Via Reuters

The first major challenge is defining exactly what is meant by “AI.” Without a clear, unequivocal definition of what is to be regulated, any regulation’s success is doubtful. Yet forming a satisfactory definition for this purpose is no easy task.

For a start, Wikiwand (an alternative interface for Wikipedia) defines AI as “the ability of machines to perform tasks that are typically associated with human intelligence, such as learning and problem-solving.” But this definition is too broad to be useful, as it captures a vast array of algorithms imitating the step-by-step reasoning humans are taught to use to solve puzzles or make logical deductions. It would capture the punch-card technology used to govern Jacquard weaving looms alongside social media content recommendations (not to mention knitting patterns and cooking recipes, if taken to extremes). It is also misleading, as humans rarely use the disciplined step-by-step reasoning embodied in these algorithms when making decisions. The sheer size of the computational task required for a complete assessment of even a small number of options usually means humans rush to judgment—and hence make inconsistent decisions when faced with an identical set of data and decision-making context, as Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein note in Noise: A Flaw in Human Judgment (William Collins, 2022).
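To see just how much that broad definition sweeps in, consider a minimal sketch in Python (the riddle and its rules are invented purely for illustration): a fixed, explicitly programmed deduction routine that “performs a task typically associated with human intelligence” and therefore counts as AI under the Wikiwand definition, despite being no more intelligent than a loom’s punch card.

```python
# A fixed, rule-based "problem solver": every step is explicitly
# programmed and nothing is learned, yet it performs logical
# deduction, a task "typically associated with human intelligence,"
# and so falls under the broad definition of AI.

def solve_riddle(clues: dict[str, bool]) -> str:
    """Deduce the culprit from hard-coded if/then rules.
    (The riddle itself is hypothetical, purely for illustration.)"""
    if clues["alibi_confirmed"]:
        return "not the butler"
    if clues["fingerprints_on_candlestick"] and not clues["wears_gloves"]:
        return "the butler"
    return "inconclusive"

print(solve_riddle({"alibi_confirmed": False,
                    "fingerprints_on_candlestick": True,
                    "wears_gloves": False}))
# -> "the butler"
```

Every branch here was written by a human in advance; nothing about the program changes with experience.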

While it is easy to swap “artificial” for “human” in a phrase, what is meant by “intelligence”? Although we know a lot about how the human brain works, that knowledge is far from complete, and there is no consensus as to what exactly human intelligence is. Until there is, it is impossible to define precisely how that intelligence might be imitated artificially. That has not dissuaded some from homing in on computer models’ ability to “learn”—to update some of their governing parameters on the basis of data encountered during operation, without being explicitly programmed to do so—as the defining characteristic of their intelligence. Should the appropriate target of regulation, then, be machine learning rather than artificial intelligence? That definition is problematic too, as it requires the “machine” to be defined. It may appear simple to substitute “computer” for “machine,” but what exactly constitutes a “computer”? Is it a small-scale programmable chip or the sum of thousands of chips and software? Or something else?
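The “learning” criterion can be made equally concrete. Below is a minimal sketch, again in Python and again invented purely for illustration: a one-parameter model that updates its governing parameter from data encountered during operation, rather than having that parameter set by an explicit program.

```python
# A minimal "machine learner" in the sense used above: the governing
# parameter (an estimate of the average of a data stream) is updated
# from observations seen during operation, not set by the programmer.

class RunningEstimator:
    def __init__(self) -> None:
        self.estimate = 0.0   # the governing parameter
        self.count = 0

    def update(self, observation: float) -> None:
        # Incremental-mean update: each observation nudges the
        # parameter toward the data, with no explicit reprogramming.
        self.count += 1
        self.estimate += (observation - self.estimate) / self.count

model = RunningEstimator()
for x in [4.0, 6.0, 5.0]:    # data "encountered during operation"
    model.update(x)
print(model.estimate)        # -> 5.0
```

By the learning criterion, this trivial running average already qualifies as machine learning, while the riddle solver above does not; it is far from obvious that either boundary tracks what legislators actually want to regulate.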

Given this definitional difficulty, it is unsurprising to find that AI regulatory efforts have largely skipped over the task of defining the regulated object. The European Union AI Act eschews a precise definition, instead providing rules establishing “obligations for providers and users depending on the level of risk from artificial intelligence.” That is, it regulates providers and users of AI and the sectors they operate in, not the technology itself. The aim is to “promote the uptake of human-centric and trustworthy AI and protect the health, safety, fundamental rights and democracy from its harmful effects.” Similarly, under Canada’s AI and Data Act,

businesses will be held responsible for the AI activities under their control. They will be required to implement new governance mechanisms and policies that will consider and address the risks of their AI system and give users enough information to make informed decisions.

The act regulates businesses, not the technology.

The EU act moves even further from defining AI when it identifies high-risk applications. These include “AI systems that pose significant harm to people’s health, safety, fundamental rights or the environment.” The act bans intrusive and discriminatory uses of AI, including biometric identification systems; predictive policing systems based on profiling, location, or past criminal behavior; and untargeted scraping of facial images from the internet or CCTV footage. It seems somewhat ironic that the AI version of police profiling is banned while the human version escapes scrutiny—and that operators of AI photo-enhancement apps fear prosecution while airbrushing is apparently okay.

At the very least, it is to be hoped that the US debate is clear (and honest) in defining what is being regulated, why, and which harms will thereby be avoided.