12 Takeaways from the ‘ChatGPT: Risks and Opportunities for Public Comments in Rulemaking Webinar’

Every day, more ethical questions arise around the development and deployment of AI tools in rulemaking. Last month, the folks at the American Bar Association’s Administrative Law Section went looking for some answers, specifically around issues pertaining to the public comments process.

“ChatGPT: Risks and Opportunities for Public Comments in Rulemaking” was one of the most popular webinars we’ve ever hosted, with over 600 people registered to view the session live. Moderated by DocketScope’s President Dhiren Patel, the session featured an unparalleled panel of experts:

  • Reeve Bull, Deputy Director, Virginia Office of Regulatory Management, Washington, DC
  • Bridget Dooling, Research Professor, GW Regulatory Studies Center, Washington, DC
  • Sabrina Jawed, Manager, Space Regulations and Standards Branch, Federal Aviation Administration
  • Beth Simone Noveck, Professor, Northeastern University, Boston, MA
  • Connor Raso, Senior Associate General Counsel, Public Company Accounting Oversight Board, Washington, DC

Prompted by thoughtful questions from Dhiren, our panelists spent 90 minutes sharing their ideas about the positive ways that ChatGPT and similar AI-powered tools are poised to impact public comments, rulemaking, and governance – along with some concerns. We saw a few key themes emerge, especially around issues of equity, accountability, and public participation in the rulemaking process.

We invite you to watch the webinar recording at the ABA Administrative Law Section’s YouTube Channel at your convenience. Until you have an opportunity to catch up, here are a dozen key panelist insights to get you thinking.

1. AI can help [commenters] formulate a more cogent argument. For more diverse participants who may never have participated in the public comments process, especially non-native speakers, it’s going to help them to write a good quality comment. – Beth Simone Noveck

2. It means something to have a comment and to be as articulate as possible. If ChatGPT can allow a commenter who may struggle in that area to provide a more articulate comment, that comment is likely to be viewed more seriously by the agency because it may be easier to understand. – Sabrina Jawed

3. You can look at it and dissect, “Here’s how you structure a comment.” ChatGPT can help people who aren’t necessarily accustomed to writing in professional English formulate their ideas in a way that looks like a more standard comment. It can make them look more formal, and that may help those comments gain more traction with the agency. – Bridget Dooling

4. Sometimes in some fields there’s an established group of scholars who dominate publications and dominate the discourse. I’m interested in whether ChatGPT will encourage commenters to be aware of arguments that might be made in papers that haven’t been cited a lot, whether it’ll democratize the debate. Maybe ChatGPT will help inject some ideas that are interesting and worthwhile, but sometimes overlooked. – Connor Raso

5. I think that to the extent that the technology could help us translate some of these more technical issues into language that could be understood by a more general population, that would be very helpful. – Sabrina Jawed

6. My only concern is [whether AI-powered technology] increases the practical potential for comments to be misattributed. Of course, there’s that potential now and we’ve seen that happen, but I worry that [AI will] make it a lot easier for that to happen and harder for agencies to spot it. – Connor Raso

7. If [AI-powered technology] is enhancing the quality of the information, then great. I think it’s useful. But if it’s changing it in a way to where the information provided is not accurate, or it may make it sound more credible than it really is, then I think it starts to be problematic. It could potentially deceive the agency in terms of where the comments are coming from or how accurate they are. – Reeve Bull

8. The concern is perhaps that the volume becomes much greater. In the past, these mass comment campaigns all tended to be identical. Now the agency is potentially getting thousands, maybe even tens of thousands of comments that seem credible, but when you actually drill down, you realize that they’re not, that the content is actually fabricated. – Reeve Bull

9. We need a person in the loop. Even if we are using the technology to assist, summarize, et cetera, we always need a person in the loop to go back and make sure that we are catching everything. Ultimately, it’s still going to be the experts that make the draft and finalize the responses to those comments, even if we’re using the technology to assist in summarizing and identifying unique positions. – Sabrina Jawed

10. What matters is the quality of the idea, not really who wrote it. There is a reason why our rulemaking process doesn’t limit participation. It is fundamentally open, and part of what flows from that is a need to be open to ideas from all kinds of sources, whether they came from something like generative AI or not. – Bridget Dooling

11. From a technical and democratic perspective, [we can get] better quality input, drawing in different communities that have not traditionally had the opportunity to participate in the comments process. And I think that’s what these alternative mechanisms for obtaining public input could really facilitate. – Reeve Bull

12. How do we foster engagement in a more equitable way? It will be very interesting to see how third parties, interest groups, and stakeholders start to use these tools to help people participate in these processes. – Beth Simone Noveck

DocketScope’s intuitive software transforms public comment analysis for proposed regulations. It allows agency policy staff to quickly identify the “relevant matter presented,” as required by the Administrative Procedure Act, freeing them to focus on considering the issues raised and writing targeted responses to stakeholder comments. Schedule your demo today.