1) Approaches must ensure that the public can trust artificial intelligence. The ACR suggested the U.S. government work with third parties, such as professional associations, to create validation services, certification measures and real-world performance monitoring agencies.

2) The ACR said it agrees with OMB that the public should be involved in federal processes that ensure the transparency and accountability of regulators.
3) Scientific integrity and information quality should inform rulemaking and guidance efforts, the ACR noted. Specifically, it called for “transparency articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigations, and appropriate use of the regulated AI applications.” Risks and their mitigations should also be disclosed.
4) Oversight approaches should be based on a “consistent application of risk assessment and risk management” across multiple agencies and technologies, as OMB stated. Regulators must keep in mind, however, that certain sectors, such as healthcare, may have gaps in oversight. Third-party validation and certification can help in such instances.
5) The benefits and costs of regulating specific AI applications should also be considered. Collaborating with national associations that represent AI users, such as the ACR Data Science Institute, can ensure resources are funneled toward innovations that will actually be adopted and implemented.