FTC authority to regulate artificial intelligence – Reuters

July 8, 2021 – The FTC has long exercised its authority to regulate private sector uses of personal information and algorithms that impact consumers. That authority stems from Section 5 of the FTC Act (Section 5), the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA).
Section 5 prohibits unfair or deceptive acts or practices in or affecting commerce. An act or practice is considered deceptive if there is a statement, omission or other practice that is likely to mislead a consumer acting reasonably under the circumstances, causing harm to the consumer.
An act or practice is considered unfair if it causes or is likely to cause substantial injury to consumers that consumers cannot reasonably avoid and that is not outweighed by countervailing benefits to consumers or to competition.
The FTC’s most recent guidance offers examples of how AI deployments could be deemed deceptive (e.g., if organizations overpromise regarding AI performance or fairness) or unfair (e.g., if algorithms impact certain racial or ethnic groups unfairly).
FCRA regulates consumer reporting agencies and the use of consumer reports. The FTC’s AI guidance and enforcement actions make clear that the agency considers certain algorithmic or AI-based collection and use of data to be subject to the FCRA.
For example, if an organization purchases a report or score about a consumer from a background check company that was generated using AI tools, and uses that score or report to deny the consumer housing, that organization must provide an adverse action notice to the consumer as required by the FCRA.
The FTC has also noted that organizations that supply data that may be used for AI-based insurance, credit, employment or similar eligibility decisions may have FCRA obligations as “information furnishers.”
The ECOA prohibits discrimination in access to credit based on protected characteristics such as race, color, sex, religion, age and marital status. The FTC notes in both its 2020 and 2021 guidance that if, for example, a company used an algorithm that, either directly or through disparate impact, discriminated against a protected class with respect to credit decisions, the FTC could challenge that practice under the ECOA.
The FTC’s updated guidance provides insight into the expectations for organizations using AI.
• Start with the right foundation: The FTC states that the key to addressing disparate treatment of protected groups is to assess, from the beginning, whether training data sets have gaps, and organizations should consider how they can improve their data sets or establish controls for AI to address any gaps, including limiting how and where the algorithm is used (depending on the potential data shortcomings). This builds on the FTC’s 2020 guidance, which recommended that companies validate and revalidate data sets not only to ensure accuracy but also to avoid unlawful discrimination, as well as the FTC’s 2016 big data report, which details the importance of relying on representative data sets and vetting data sets for bias. The FTC has previously noted that when evaluating the legality of AI, it will consider inputs to the model, “such as whether the model includes ethnically-based factors, or proxies for such factors, such as census tract.” (A minimal representativeness check is sketched after this list.)
• Watch out for discriminatory outcomes: The FTC recommends testing algorithms before use and regularly thereafter to “make sure that [organizations do not] discriminate on the basis of race, gender or other protected class.” Again, this builds on the 2020 and 2016 recommendations designed to make AI outcomes fair and ethical. Additionally, the FTC’s 2020 guidance notes that organizations should consider the potential for disparate impact in an AI system’s outcomes (a simple disparate impact screen is sketched after this list). Some questions the FTC suggests for assessing the fairness of algorithms are:
(1) How representative is the data set?
(2) Does the data model account for biases?
(3) How accurate are the predictions based on big data?
(4) Does the particular reliance on big data raise ethical or fairness concerns?
• Embrace transparency and independence: To reduce the potential for discriminatory outcomes, the FTC suggests embracing transparency and independent review by, for example, conducting and publishing independent audits and publishing source code for outside inspection. The 2020 guidance further notes the importance of being transparent with consumers regarding the use of automated tools, including the factors used to generate any automated decisions.
• Don’t exaggerate what your algorithms can do or whether they can deliver fair or unbiased results: The FTC reminds organizations not to exaggerate what their algorithms can do, as exaggerations may run afoul of the deception provisions of Section 5. This is one of the more straightforward areas for FTC enforcement, and typically where the FTC issues guidance about a particular technology, it is vigilant about misrepresentations related to that technology.
• Tell the truth about how you use data: The FTC emphasizes in the 2021 and 2020 guidance that organizations should notify consumers about how and when consumer personal information will be used by AI or used to develop AI, especially if the information is sensitive. The FTC notes that failure to properly explain how consumers can control the use of personal information to develop algorithms may lead to enforcement under Section 5.
• Do more good than harm: The FTC advises organizations to ask themselves whether their AI models cause more harm than good. If so, the algorithms could be considered “unfair” under Section 5 and therefore subject to enforcement. Algorithms operating in areas like housing, credit, or other circumstances in which inaccuracies could have significant negative effects on consumers should be assessed carefully.
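The FTC’s guidance does not prescribe any particular testing methodology, but the data-gap assessment described in “Start with the right foundation” can be approximated in code. The following is a minimal, hypothetical sketch in Python; the column name, reference population shares, and 20% tolerance are illustrative assumptions, not regulatory standards.

```python
# Hypothetical sketch: flag groups whose share of a training set deviates
# from a reference population share. The group key, reference shares, and
# tolerance are illustrative assumptions, not FTC-prescribed values.
from collections import Counter

def representation_gaps(records, group_key, reference_shares, tolerance=0.2):
    """Return groups whose observed share of the training data differs
    from the expected (reference) share by more than the given relative
    tolerance."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if expected and abs(observed - expected) / expected > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Made-up example: group "B" is 40% of the reference population but only
# 20% of the training data, so both groups' shares fall outside tolerance.
training_records = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
print(representation_gaps(training_records, "group", {"A": 0.6, "B": 0.4}))
```

Comparing observed shares against an external reference population is one simple way to surface the kind of data set gaps the 2016 big data report warns about; a real assessment would examine many attributes, and their intersections, rather than a single field.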
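For the outcome testing described in “Watch out for discriminatory outcomes,” one common starting point is the “four-fifths” (80%) rule of thumb borrowed from employment selection guidelines. The FTC guidance does not mandate this test; the sketch below, using hypothetical approval data, simply illustrates one way to screen outcome rates across groups for potential disparate impact.

```python
# Hypothetical sketch: a disparate impact screen based on the four-fifths
# rule of thumb. This is one common screening heuristic, not an
# FTC-prescribed test; the data and 0.8 threshold are illustrative.
def adverse_impact_ratios(outcomes, threshold=0.8):
    """outcomes maps each group to (favorable_count, total_count).
    Returns each group's favorable-outcome rate and its ratio to the
    highest-rate group; ratios below the threshold warrant review."""
    rates = {g: fav / tot for g, (fav, tot) in outcomes.items() if tot}
    best = max(rates.values())
    return {
        g: {"rate": round(rate, 3),
            "ratio": round(rate / best, 3),
            "flag": (rate / best) < threshold}
        for g, rate in rates.items()
    }

# Made-up example: group "B" is approved at 45% versus 60% for group "A";
# 0.45 / 0.60 = 0.75, which is below 0.8, so "B" is flagged for review.
print(adverse_impact_ratios({"A": (60, 100), "B": (45, 100)}))
```

A flagged ratio is a signal for closer review, not a legal conclusion; whether a disparity amounts to unlawful discrimination under the ECOA or Section 5 depends on the full context of the decision.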
Organizations deploying AI are well-advised to consider whether they are doing so in alignment with the FTC’s recommendations and how best to demonstrate that such use is truthful, fair, and equitable in the eyes of the FTC.
The authors would like to thank senior paralegal Brittney Griffin for her contribution to this article.
Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Westlaw Today is owned by Thomson Reuters.
Bret S. Cohen, a partner at Hogan Lovells based in Washington, D.C., helps businesses comply with privacy, cybersecurity, Internet and consumer protection laws. With a particular focus on the Internet and e-commerce, Cohen has advised extensively on legal issues related to cloud computing, social media, mobile applications, online tracking and analytics, and software development. He can be reached at [email protected].

Counsel W. James Denvil advises clients on a range of technology and data issues, including global privacy governance, incident preparedness and response, workforce monitoring, electronic contracting, digital advertising, and public policy initiatives supporting innovative information use and sharing practices. Also based in Washington, he can be reached at [email protected].

Filippo A. Raso is a Washington-based associate with the firm who helps companies leverage data to deliver innovative solutions while managing legal, reputational and practical risk. Raso counsels companies on a broad range of issues, including the use and disclosure of health information, data breach response, security program development, M&A-related privacy concerns, and the newly enacted California Consumer Privacy Act. He can be reached at [email protected].

Source: https://www.reuters.com/legal/legalindustry/ftc-authority-regulate-artificial-intelligence-2021-07-08/