October 27, 2025
Office of Science and Technology Policy
Executive Office of the President
1650 Pennsylvania Avenue NW
Washington, DC 20502
Re: Notice of Request for Information, Regulatory Reform on Artificial Intelligence (Docket ID: OSTP–TECH–2025–0067)
Ryan Nabil
Director and Senior Fellow, Technology Policy
National Taxpayers Union Foundation
122 C St NW
Washington, DC 20001
Introduction
On behalf of the National Taxpayers Union Foundation (NTUF), I appreciate the opportunity to submit these comments in response to the White House’s request for input on the U.S. approach to AI regulation.1 Based in Washington, DC, the National Taxpayers Union is the oldest taxpayer advocacy organization in the United States. Its affiliated think tank, NTUF, conducts analysis of economic and technology policy issues affecting taxpayers, including U.S. and international approaches to emerging technologies and innovation policy.
NTUF appreciates the Administration’s recognition of the need to promote a regulatory environment that supports the responsible development of artificial intelligence and AI-enabled applications—advancing innovation while addressing associated risks. As the Administration develops a more detailed approach to AI governance, it has an opportunity to strengthen the coherence and effectiveness of the U.S. regulatory framework. Achieving this goal will require modernizing regulatory processes to remove duplication and outdated procedures, addressing gaps in oversight, improving interagency coordination, and establishing structured mechanisms—such as pilot programs and regulatory sandboxes—to generate evidence for continuous improvement.
Regulatory Challenges for U.S. AI Governance
As the Administration reviews the U.S. approach to AI governance, it would benefit from close attention to several emerging regulatory challenges:
1. Fragmented Regulatory Landscape
The growing number of state-level AI laws—combined with the absence of a federal framework—has produced an increasingly fragmented regulatory landscape.2 At the federal level, overlapping mandates and inconsistent interpretations across agencies create uncertainty for businesses. Although initiatives such as the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework and ongoing interagency discussions have improved shared understanding, they have not yet established effective mechanisms for coordination or harmonization. Without a coherent federal approach, firms will continue to face duplicative requirements and uneven compliance expectations across state boundaries and federal agencies.3
2. Duplicative Regulatory Requirements and Gaps in Oversight
A growing number of reporting and documentation requirements have introduced new layers of compliance without necessarily improving oversight. The increase in federal and state regulations raises the risk that firms in some sectors must satisfy duplicative obligations across multiple agencies, diverting resources from innovation and safety assurance.4 Meanwhile, the absence of a coherent approach can leave certain areas—such as data privacy—without adequate safeguards, despite the proliferation of laws and regulations at both the state and federal levels. The result is a regulatory environment that is at once burdensome and inadequate—one that layers procedural requirements, increasing overall compliance costs without improving consumer safety, accountability, or legal clarity.
Therefore, periodic structured reviews are essential to identify genuine gaps, determine which mandates meaningfully enhance accountability, and eliminate those that impose unnecessary administrative delay or duplication. Such reviews would help ensure that oversight remains evidence-based, proportionate, and outcome-focused.
3. Outdated Certification and Approval Processes
While several agencies have begun exploring new approaches to AI oversight—including the Food and Drug Administration’s (FDA) proposed framework for adaptive algorithms and the Federal Aviation Administration’s (FAA) work on assurance cases for autonomous systems—most existing certification and approval processes remain rooted in frameworks designed for earlier, static technologies.5 These procedures often fail to accommodate adaptive AI systems that evolve after deployment. To remain effective and relevant, certification mechanisms should move beyond one-time pre-market approvals toward life-cycle oversight that reflects continuous learning and model updates.
4. Reliance on Informal Guidance over Formal Rulemaking
While excessive procedural mandates add compliance costs, the increasing use of informal guidance creates a different kind of uncertainty—one in which rules shift without notice or accountability, compounding the difficulty of long-term planning and responsible innovation.6 In particular, the growing use of policy statements, advisories, and interpretive materials in place of formal rulemaking has introduced regulatory uncertainty for businesses. Because such documents lack binding legal effect and can be revised or withdrawn without due process, they can discourage long-term investment and responsible experimentation.7 Agencies should therefore use formal rulemaking where necessary to create stable, durable standards. When they do rely on informal guidance, they should do so transparently, predictably, and with clear procedural safeguards.
5. Limited and Inconsistent Use of Pilot and Experimental Authorities
Several agencies have sought to conduct pilots, grant waivers, or create controlled testing environments, yet these tools remain underused and would benefit from greater coherence and more deliberate regulatory design. Past initiatives—such as the Consumer Financial Protection Bureau’s (CFPB) Compliance Assistance Sandbox in fintech regulation and the SANDBOX Act’s proposed AI sandbox—included only limited mechanisms for translating the findings of such programs into broader sectoral or cross-agency regulatory reform.8 Well-designed and systematically evaluated AI sandboxes and other pilot initiatives can enable regulators to better understand emerging applications of AI within their respective domains and calibrate rules based on evidence. In the absence of such programs, the U.S. regulatory environment risks falling behind jurisdictions that have made structured regulatory experimentation an integral component of their AI oversight frameworks.
6. Gaps in Technical Expertise and Institutional Learning
Although several agencies have taken steps to strengthen technical capacity—for instance, through technical fellowships and agency-specific training initiatives—expertise remains uneven and often insufficient to keep pace with the complexity of modern AI systems.9 Many regulators still lack in-house expertise and mechanisms for incorporating technical learning, model evaluation, and real-world evidence into their rulemaking processes. These shortcomings limit the government’s ability to assess claims made by developers, evaluate sector-specific risks, or update regulations as technologies evolve. Structured mechanisms—such as AI sandbox programs and research partnerships—can address this gap by embedding technical learning within regulatory practice. Complementing these efforts through interagency secondments, targeted fellowships, and sustained collaboration with research institutions would help ensure that U.S. oversight remains informed, proportionate, and adaptive.
Considerations for a Modernized, Streamlined Regulatory Approach
As the Administration reviews the U.S. regulatory approach to artificial intelligence, it would benefit from focusing on the following key areas:
1. Streamlining and Modernizing Existing Regulatory Frameworks
Under the current U.S. regulatory approach, federal agencies already possess broad authority to oversee AI systems and applications within their respective domains.10 The priority should be to update how these authorities are applied so that regulation remains innovation-friendly, proportionate, and responsive to technological change. Agencies should review overlapping reporting, certification, and documentation mandates—including those introduced in recent years—to eliminate duplication and replace outdated procedures with approaches better suited to adaptive technologies. More specifically, this review process should focus on removing barriers that add complexity without improving oversight, while preserving safeguards that demonstrably enhance accountability. Such reviews should also identify any genuine gaps to ensure that oversight remains coherent, balanced, and proportionate to risk.
2. Developing Interagency Coordination and Monitoring Mechanisms
The U.S. government is right to pursue a sectoral approach to AI governance, as AI applications vary widely by context, and an overly prescriptive framework that overlooks these differences would risk limiting adaptability and imposing disproportionate regulatory costs.11 However, fragmented sectoral oversight without effective coordination creates the risk of duplicative and inconsistent regulatory requirements across sectors—underscoring the need for more effective interagency coordination and monitoring mechanisms.12
To balance these challenges, the federal government should develop interagency mechanisms to promote coherence in applying AI principles while preserving flexibility for sector-specific oversight. Drawing on recommendations previously submitted by NTUF to the White House in March 2025,13 as well as international best practices, such mechanisms could include shared taxonomies, coordinated risk-assessment frameworks, and standard reporting templates to facilitate information exchange and reduce regulatory overlap.14
Likewise, the White House would benefit from working with congressional leaders to institutionalize mechanisms for shared definitions, interagency coordination, and periodic reviews of overlapping and outdated mandates. Coordination should include a structured feedback process so that lessons from pilots, enforcement actions, and market developments inform subsequent rule revisions. Strengthening interagency feedback loops would promote greater coherence, reduce duplication, and ensure that AI oversight remains adaptive, proportionate, and evidence-based.
3. Establishing Sector-Specific Regulatory Sandboxes for an Evidence-Based, Iterative Approach
The White House should encourage Congress to establish legislative frameworks for sector-specific AI sandboxes and work with agencies to ensure their effective implementation and interagency coordination. Such programs would help regulators better understand how emerging AI technologies and business models interact with existing regulatory frameworks. They would allow firms to deploy AI systems under close regulatory supervision for a limited period—subject to appropriate waivers or tailored guidance—while regulators observe real-world performance and identify where current rules may be excessive, outdated, or inadequate. Insights generated through these initiatives should inform broader rulemaking and policy evaluation, helping agencies modernize sectoral frameworks and align requirements with demonstrated risks.15
To be effective in promoting an evidence-based, iterative approach to AI governance, however, such programs must be designed with clear objectives, evaluation criteria, and mechanisms for translating lessons learned into lasting regulatory improvements.16
Recent legislative proposals to formalize regulatory experimentation highlight the limitations of the current federal approach to sandboxes. AI sandboxes are not substitutes for carefully designed regulatory frameworks, nor should they be conceived as vehicles for short-term industrial policy. Their value lies in helping regulators understand emerging technologies, identify where existing rules fall short, and generate insights for evidence-based rulemaking and reform.17
The recently introduced SANDBOX Act exemplifies these challenges: it focuses narrowly on job creation, centralizes authority within the White House, grants overly broad and lengthy waivers without clear justification, and fails to establish mechanisms for translating sandbox insights into broader regulatory improvements.18 A more effective legislative framework would embed evidence generation and institutional learning as core objectives—linking sandbox authorizations to structured evaluation, interagency feedback, and the iterative refinement of rules.19 Properly designed, regulatory sandboxes could become a key mechanism for ensuring that U.S. AI governance evolves in response to evidence while maintaining accountability and proportionality.
4. Improving Transparency and Accountability in the Use of Informal Guidance
Informal guidance plays an essential role in helping firms and regulators navigate emerging technologies. Yet, as discussed earlier, excessive reliance on such instruments without clear procedural safeguards can create uncertainty and discourage long-term investment.20 To address these concerns, agencies should more clearly distinguish between formal rulemaking and advisory materials and consolidate all official guidance in a centralized, publicly accessible repository. Greater transparency and consistent interpretive practices would preserve the benefits of flexibility while promoting legal certainty, accountability, and due process in AI governance.
5. Building Technical Capacity and Regulatory Expertise
Ultimately, successful regulatory modernization depends on the extent to which agency staff have the expertise to assess new technologies and integrate technical evidence into decision-making. While several agencies have made progress, technical and analytical capacity in AI governance remains uneven across the federal landscape.21 Agencies should expand specialized roles and fellowships that bring domain experts into regulatory teams and develop shared analytical resources to reduce duplication. Regulatory sandboxes also play an important role in enabling regulators to deepen their understanding of AI applications within their sectors.22 Likewise, collaboration with research institutions can help fill specialized knowledge gaps. Strengthening technical capacity would enable regulators to assess risks more precisely, update rules based on evidence, and keep oversight aligned with the pace of technological change.
Conclusion
As the Administration refines the U.S. approach to AI governance, the priority should be to improve how existing regulatory frameworks function—making them more coherent, transparent, and responsive to technological change. A renewed focus on streamlining, experimentation, and technical capacity would allow agencies to apply their mandates more effectively and maintain consistent, transparent oversight as AI technologies evolve. Taken together, these measures would enhance the overall quality and adaptability of the U.S. regulatory framework, helping to ensure that oversight remains proportionate while maintaining public trust, accountability, and due process. NTUF appreciates the opportunity to provide these comments and stands ready to support the Office of Science and Technology Policy’s continued efforts to strengthen the coherence, transparency, and effectiveness of U.S. AI governance.
1 This document is approved for public dissemination. The document contains no business-proprietary or confidential information. National Science and Technology Council, “Notice of Request for Information; Regulatory Reform on Artificial Intelligence,” Federal Register 90, no. 187 (September 26, 2025): 46422–46424, https://www.federalregister.gov/documents/2025/09/26/2025-18737/notice-of-request-for-information-regulatory-reform-on-artificial-intelligence.
2 Stanford Institute for Human-Centered Artificial Intelligence (HAI), 2025 AI Index Report, chap. 6, “Policy and Governance” (Stanford, CA: Stanford University, 2025), https://hai.stanford.edu/ai-index/2025-ai-index-report/policy-and-governance.
3 Stanford HAI, AI Index Report.
4 Stanford HAI, AI Index Report.
5 Andrew D. Selbst, “An Institutional View of Algorithmic Impact Assessments,” Harvard Journal of Law & Technology 35, no. 1 (2021): 138–139, https://jolt.law.harvard.edu/assets/articlePDFs/v35/35HarvJLTech117.pdf.
6 Nina A. Mendelson, “Regulatory Beneficiaries and Informal Agency Policymaking,” Cornell Law Review 92 (2007): 67–130, https://repository.law.umich.edu/articles/210/.
7 Mendelson, “Regulatory Beneficiaries.”
8 Strengthening Artificial Intelligence Normalization and Diffusion by Oversight and Experimentation (SANDBOX) Act, S. 2750, 119th Cong. (2025); Ryan Nabil, Rethinking the SANDBOX Act: Why the United States Needs Better-Designed AI Sandboxes (Washington, DC: National Taxpayers Union Foundation, 2025), https://www.ntu.org/foundation/detail/why-the-united-states-needs-better-designed-ai-sandboxes.
9 U.S. Government Accountability Office (GAO), Artificial Intelligence: Key Practices to Help Ensure Accountability in Federal Use, GAO-23-106811 (Washington, DC: GAO, May 16, 2023), https://www.gao.gov/products/gao-23-106811.
10 For an overview of the current U.S. approach to AI governance, see Congressional Research Service (CRS), Regulating Artificial Intelligence: U.S. and International Approaches and Considerations for Congress, Report R48555 (Washington, DC: Library of Congress, June 4, 2025): 4–12, https://www.congress.gov/crs-product/R48555.
11 CRS, Regulating Artificial Intelligence, 4–12.
12 Department for Science, Innovation and Technology (DSIT), A Pro-Innovation Approach to AI Regulation (London: DSIT, March 29, 2023), https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach; CRS, Regulating Artificial Intelligence, 12–14.
13 Ryan Nabil, “Developing a Flexible, Innovation-Focused U.S. Approach to AI Governance,” Comment submitted in response to the White House Request for Information on the Development of an Artificial Intelligence (AI) Action Plan, National Taxpayers Union Foundation, March 15, 2025, https://www.ntu.org/foundation/detail/developing-a-flexible-innovation-focused-us-approach-to-ai-governance.
14 DSIT, A Pro-Innovation Approach to AI Regulation.
15 Ryan Nabil, “Artificial Intelligence Regulatory Sandboxes,” Journal of Law, Economics, and Policy 19, no. 2 (2024): 295–348, https://www.jlep.net/s/Nabil-Final-for-PDF.pdf.
16 Nabil, “Artificial Intelligence Regulatory Sandboxes.”
17 Nabil, Rethinking the SANDBOX Act.
18 SANDBOX Act, S. 2750, 119th Cong. (2025); Nabil, Rethinking the SANDBOX Act.
19 Nabil, Rethinking the SANDBOX Act.
20 Mendelson, “Regulatory Beneficiaries.”
21 GAO, Artificial Intelligence.
22 Nabil, “Artificial Intelligence Regulatory Sandboxes.”