By Dan Chenok and Virginia Huth
Virginia Huth currently serves as the SES Assistant Commissioner of the Office of Regulatory and Oversight Systems in the Office of Technology Transformation Services at the U.S. General Services Administration, which is overseeing the modernization of the eRulemaking system. Virginia is writing in her personal capacity; her opinions are her own and do not represent the views of the GSA. Dan Chenok heads the IBM Center for the Business of Government.
Significant attention has been paid of late to how best to approach potential regulation of artificial intelligence (AI). But what about the converse of this proposition – how can AI help governments become more efficient in issuing and analyzing regulations?
A major challenge in the rulemaking process is managing massive volumes of public comments, which agencies must review for substantive points that inform the basis of the rulemaking. For example, the National Environmental Policy Act rulemaking of 2020 received over 1.1 million comments. AI can improve agency accountability in addressing all substantive comments.
Another major challenge is the complexity of some rulemakings: some run over 1,000 pages, rely on scientific studies and data to inform the analysis, and take years to complete. Yet rulemakings still typically appear as PDFs, with no ability to search the document for key text. Text analytics that tag data for meaning would be an important step toward machine-readable text. Machine-readable text would not only allow rulemakers to identify key parts of a new rulemaking that need coordination, but would also help with retrospective review of prior rules and with identifying opportunities to reduce duplication across multiple rulemakings. It can also help the public find the parts of a rule of particular interest to them, enabling more meaningful comments.
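To make this concrete, the sketch below shows one way rule text could be tagged into a machine-readable structure and then searched by keyword. This is a minimal illustration using only the Python standard library; the "Sec. N. Title" heading convention and the sample rule text are hypothetical, and a production system would need a far more robust parser.

```python
import json
import re

# A minimal sketch, not a production parser: split a rule's plain text
# into numbered sections and emit tagged, machine-readable records.
# The "Sec. N. Title" heading convention is a hypothetical format.
HEADING = re.compile(r"^Sec\.\s*(\d+)\.\s*(.+)$", re.MULTILINE)

def tag_rule(text):
    """Return a list of {section, title, body} records."""
    matches = list(HEADING.finditer(text))
    sections = []
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        sections.append({
            "section": m.group(1),
            "title": m.group(2).strip(),
            "body": text[m.end():end].strip(),
        })
    return sections

def find_sections(sections, keyword):
    """Return section numbers whose title or body mentions the keyword."""
    kw = keyword.lower()
    return [s["section"] for s in sections
            if kw in s["title"].lower() or kw in s["body"].lower()]

rule_text = """Sec. 1. Purpose.
This rule establishes new reporting requirements for covered facilities.
Sec. 2. Definitions.
Terms used in the reporting requirements are defined here.
"""

tagged = tag_rule(rule_text)
print(json.dumps(tagged, indent=2))
print("Sections mentioning 'reporting':", find_sections(tagged, "reporting"))
```

Even this crude structure would let an agency coordinate on specific sections, and let a commenter jump directly to the parts of a rule that matter to them.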
Some argue that the risks of AI are too high for the regulatory process, and that the current process is sufficient. Yet the current process can be strengthened. The key lies in how AI can further the foundational principles of transparency, public engagement, and accountability.
Regulatory Development
Agency analysis and decision-making for a rulemaking generally begins with a determination of the need for a regulation, followed by research and information gathering to support the analysis (known as “establishing the record”), and then drafting of the text of the proposed rule.
Historically, agency staff have engaged in regulatory analysis in a linear fashion, sifting through many documents of varying degrees of complexity to develop alternatives for review. AI has enabled public and private sector organizations to collect vast amounts of information and array common themes in an organized fashion, orders of magnitude faster than conventional analysis.
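As an illustration of how such thematic grouping might work, here is a hedged sketch using scikit-learn's TF-IDF vectorizer and k-means clustering. The documents and the choice of two themes are purely illustrative; a real project would need careful validation of cluster quality.

```python
# A hedged sketch of thematic grouping: cluster documents by TF-IDF
# similarity so analysts see common themes rather than a flat pile.
# The documents and the number of themes (k) are illustrative only.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "The proposed emissions threshold is too strict for small facilities.",
    "Small facilities cannot meet the emissions threshold on this timeline.",
    "The rule should clarify which monitoring equipment is acceptable.",
    "Please specify approved monitoring devices and calibration standards.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

k = 2  # number of themes; in practice chosen by validation, not guessed
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

for theme in range(k):
    members = [i for i, label in enumerate(model.labels_) if label == theme]
    print(f"Theme {theme}: documents {members}")
```

Grouping like this does not replace human review; it organizes the record so reviewers can work theme by theme rather than document by document.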
In the same way that cost/benefit analysis and risk estimation are critical tools in rulemaking, agencies can consider the costs, benefits, and risks of using AI to support the process. A prior report issued through the National Academy of Public Administration (NAPA) contends that AI can reduce human mistakes and correct bias in law enforcement decision-making, and those lessons could apply to other regulated sectors.
Public Comment Review
A key issue that AI can both introduce and protect against involves “fake comments” or “fake commenters.” The rulemaking process is not a popular vote. While the quantity of comments may influence decision-makers on certain issues, the Administrative Procedure Act requires that data, evidence, and a sound rationale drive the final decision. Yet perceptions matter; the belief that fraudulently submitted comments could sway the decision-making process is a dangerous threat to the integrity of rulemaking.
New AI tools can help agencies more effectively summarize mass volumes of public comments, improving public confidence by reinforcing that the substance of comments, not their volume, is what carries weight. A human should always review the analysis, and technologies exist today to trace summary information back to the source document (the public comment) for validation. Greater public awareness of these capabilities can improve public trust; in contrast, lack of technology support in large data environments can lead to incomplete analysis.
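One simple form of that traceability is sketched below: each statement in a machine-generated summary is matched back to the docket comments whose wording overlaps it most, giving a human reviewer a starting point for validation. The comment IDs, comment texts, and summary statements are hypothetical, and word overlap is a crude stand-in for more sophisticated attribution methods.

```python
import re

# A hedged sketch of summary-to-source tracing: for each summary statement,
# surface the comments whose wording overlaps most, so a human reviewer can
# validate the claim against the record. All IDs and texts are hypothetical.

def tokens(text):
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

comments = {
    "EPA-2020-0001-0042": "The compliance deadline is unrealistic for rural providers.",
    "EPA-2020-0001-0107": "Rural providers need a longer compliance deadline.",
    "EPA-2020-0001-0033": "Please define 'covered facility' more precisely.",
}

summary = [
    "Commenters asked for a longer compliance deadline for rural providers.",
    "Commenters asked for a clearer definition of covered facility.",
]

for statement in summary:
    s = tokens(statement)
    # Jaccard overlap as a crude relevance score for attribution.
    ranked = sorted(
        comments,
        key=lambda cid: len(s & tokens(comments[cid])) / len(s | tokens(comments[cid])),
        reverse=True,
    )
    print(f"{statement}\n  likely sources: {ranked[:2]}")
```

Keeping the source comment IDs attached to every summarized claim is what lets a human reviewer confirm that the summary reflects the record rather than the model.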
It is worth noting that the scope of the “fake” commenter issue has generally been limited to extremely high-profile and controversial regulations, such as the Net Neutrality rulemaking at the FCC in 2017. This is not meant to diminish the importance of the challenge, but rather to suggest that any solution should consider probability as well as impact when estimating overall risk.
Retrospective Review
A recent article in the University of Pennsylvania Law School's Regulatory Review, “Artificial Intelligence for Retrospective Regulatory Review,” by Catherine Sharkey and Cade Mallet of the New York University School of Law, provides an excellent discussion of this issue. The article and its case studies are, we think, both encouraging and instructive for governmental creators and users of AI. One lesson learned is that the resources and technical expertise required to carry an AI project to the finish line are rare among federal agencies. Where internal capacity exists, agencies should consider launching pilot projects on algorithmic retrospective review and sharing their tools openly with other federal agencies. The authors conclude that easing AI into prospective rulemaking by learning from and replicating its contributions to retrospective review is a prudent first step.