Letter to the White House: The Need for a Flexible and Innovative AI Framework

Docket ID: OSTP-TECH-2023-0007

Office of Science and Technology Policy
Executive Office of the President
1650 Pennsylvania Avenue NW
Washington, DC 20502

Re: Developing a Flexible, Innovation-Focused U.S. Approach to AI Regulation

Ryan Nabil
Director and Senior Fellow, Technology Policy
National Taxpayers Union Foundation
122 C St NW
Washington, DC 20001

July 7, 2023

On behalf of the National Taxpayers Union Foundation (NTUF), I welcome the opportunity to submit the following written evidence in response to the Office of Science and Technology Policy’s request for comments on the U.S. approach to AI regulation.[1] Located in Washington, DC, the National Taxpayers Union is the oldest taxpayer advocacy organization in the United States. Its affiliated think-tank, NTUF, conducts evidence-based research on economic and technology policy issues of interest to taxpayers, including U.S. and international approaches to artificial intelligence, emerging technologies, and data protection.

NTUF appreciates the recognition by the Biden and Trump administrations of the need to create a more favorable regulatory environment in which artificial intelligence and AI-enabled business models can thrive and promote economic growth and competitiveness. As the White House seeks to develop the U.S. approach to AI in greater detail, it has an opportunity to strengthen America’s position as a global center of AI innovation. To accomplish that goal, the U.S. needs to adopt a flexible, evidence-based approach to AI – one that distinguishes among the widely varying applications of AI in different contexts and designs proportionate, context-specific rules accordingly. We believe that the U.S. AI strategy would benefit from adopting the following recommendations:

  1. Congress and the Biden administration must refrain from passing a premature comprehensive AI governance statute that could hamstring AI innovation in the long term. Instead, the U.S. needs to adopt a flexible, innovation-focused approach that outlines the government’s AI principles, establishes the U.S. AI framework and creates mechanisms to implement it, and develops measures to promote innovation and mitigate AI risks.
  2. The United States would benefit from more closely evaluating the AI governance strategies of major jurisdictions—like the European Union, the United Kingdom, Japan, and Switzerland—to understand how best to design a flexible, well-balanced approach to AI.
  3. Given the widely divergent applications of AI to different sectors and business functions, the U.S. should regulate the applications of AI, rather than the underlying technology.
  4. To prevent regulatory fragmentation, the government should propose mechanisms that support the implementation of the U.S. AI framework.
  5. Instead of classifying the use of AI in certain sectors as “high-risk,” the U.S. should consider developing risk assessment frameworks to identify, prioritize, and mitigate AI risks.
  6. The United States should develop mechanisms to seek input from the private sector, academic institutions, and civil society in developing and calibrating AI rules.
  7. Well-designed AI sandbox programs can help improve the regulatory understanding of AI technologies and business models, design more flexible AI rules, and promote innovation.
  8. Designing reciprocal sandbox arrangements with like-minded jurisdictions – such as the UK, the EU, and Switzerland – can promote cross-border innovation and regulatory cooperation.
  9. The U.S. government should strengthen bilateral cooperation with like-minded partner countries and contribute more actively to the development of international AI norms through multilateral institutions, such as the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI.

I. Developing a Flexible, Innovation-Focused Approach to AI Governance

As leading jurisdictions around the world – from the European Union to Japan and the UK – develop their approaches to AI governance, the United States faces growing calls to develop AI legislation. However, while the United States should develop an AI framework, Congress should refrain from passing one-size-fits-all, comprehensive AI legislation that could constrain regulatory flexibility, struggle to keep pace with technological change and emerging risks, and harm innovation. Instead, a better strategy would entail the creation of a flexible, principles-based AI framework that develops well-calibrated, proportionate rules according to the specific risks associated with AI use in a given context. Without a well-balanced, carefully designed regulatory strategy, the United States runs the risk of hampering the country’s long-term AI potential.

In developing the national AI framework, U.S. lawmakers would benefit from evaluating the AI governance approaches of leading jurisdictions such as the EU, the UK, and Japan. While a detailed discussion of such national strategies goes beyond the scope of this submission, understanding different regulatory approaches – particularly between the EU and the UK – can be instructive in designing a flexible, evidence-based approach to AI governance.  

As is the case in many civil law jurisdictions, the EU’s approach to AI regulation is characterized by preemptive, detailed, and carefully negotiated legislation that seeks to predict and mitigate future risks from AI applications – as opposed to developing broader statutory principles and enabling regulators and courts to play a more active role in determining how such principles should apply to specific AI applications in light of new technological developments.

By the end of this year, the EU seeks to pass the Artificial Intelligence Act, which would likely be the world’s first comprehensive AI legislation and regulate AI use in almost every sector across the European single market.[2] Under EU constitutional law, certain legislation like the AI Act requires unanimous consensus and ratification by all 27 member states and therefore involves multiple rounds of negotiations and redrafting before it can finally become law. Therefore, the procedural benefits of passing a single comprehensive law instead of multiple sectoral laws are all too understandable in the European context. Nevertheless, many of the AI Act’s restrictive proposals – such as its vague and overly broad definition of AI and classifications of high-risk AI systems – risk hampering Europe’s innovation potential, as pointed out by leading European scientists and policymakers,[3] numerous companies like Siemens and groups such as the German AI Association,[4] and national and regional governments like France and Germany’s Bavarian state government.[5]

In contrast, the UK has advocated a more flexible, context-specific approach to AI, which seeks to regulate AI applications in different contexts, rather than the underlying AI technologies. Instead of developing comprehensive AI legislation like the EU’s AI Act, the UK government has proposed AI principles and a non-statutory AI framework, which regulators would apply to AI applications within their remit.[6] Case law and jurisprudence by English courts would further clarify how existing statutes apply to AI applications, and the government reserves the right to introduce legislation to update sectoral rules if and when necessary. Like the UK, the Japanese government has also advocated a light-touch, principles-based approach to AI regulation, which aims at promoting innovation and economic growth in light of Japan’s economic and demographic challenges.[7] 

Given the similarity of the English and U.S. legal systems, we believe that the UK’s flexible, pro-innovation approach represents a better-suited model for the U.S. than the EU’s current approach to AI governance. A well-calibrated, context-specific approach would allow the United States to remain flexible in updating its regulatory frameworks in light of new technological developments and emerging risks. Such an approach would also make it easier for sectoral legal frameworks to remain technology-neutral and allow regulators to apply the same rules and standards to applications of other emerging technologies, like quantum computing and communications – instead of having to develop separate statutes for each new wave of technologies.[8] To that end, the U.S. government should consider designing a flexible AI framework that outlines broader U.S. AI principles and guidelines for regulators and includes, among other things, mechanisms to implement the AI framework and policies to encourage innovation and mitigate future risks.

II. Proportionate, Context-Specific Framework for Regulating AI in Different Sectors

The U.S. government should adopt a proportionate, context-specific approach to develop well-calibrated rules for different uses of AI technologies in various sectors. In light of the recent development of generative AI tools like Google Bard and ChatGPT, the U.S. faces growing calls to pass legislation regulating AI. However, a major difference between AI and many previous technologies – such as atomic energy and space technologies – is AI’s potential use across a much wider segment of the economy, from healthcare to retail and financial services. The specific risks that AI poses in such sectors depend on the precise context in which AI is used, rather than on the underlying technologies themselves. That is why a proportionate approach to AI regulation should consider the precise contexts in which AI is used and develop well-calibrated rules for specific uses, instead of setting fixed rules and risk ratings for AI use across all sectors, or even within the same sector.[9]

For example, the risks to consumers associated with AI-enabled chatbots for retail customer support are lower than those of potential AI applications in medical diagnostics. Accordingly, a context-specific, proportionate approach would consider the distinct risks associated with AI applications in different circumstances and calibrate rules accordingly. Likewise, even within high-risk sectors, such as critical infrastructure, not all AI use poses the same level of risk. For instance, whereas using AI algorithms to optimize the operations of a nuclear plant carries significant risk, using AI to detect minor cosmetic flaws, such as surface damage, within the same plant carries much lower risk. Classifying entire sectors as low- or high-risk would therefore not constitute a proportionate regulatory approach.[10]

Instead, a more sensible approach would entail the creation of an AI framework that sets out the overall AI principles and clarifies the regulatory characteristics of such a framework (Table A1). The UK has adopted five AI principles based on the OECD’s guidelines for trustworthy AI: i) Safety, security, and robustness; ii) Appropriate transparency and explainability; iii) Fairness; iv) Accountability and governance; and v) Contestability and redress.[11] The Japanese government – whose policy document contributed to the formulation of the OECD’s AI principles – also recognizes and suggests similarly phrased principles in its AI governance guidelines.[12] 

Once the general principles are developed, they should form the basis of an overall AI framework. The framework should include guidelines for sectoral regulators to apply to specific AI uses in different contexts according to the specific risks they pose (Table A2). Regulators would then regulate AI within their remit while adhering to the guidelines outlined in the AI framework.

Ultimately, for such a framework to be effective in the U.S. context, Congress would need to provide a statutory basis for establishing U.S. AI principles, creating oversight over regulators for applying AI rules uniformly across different sectors, and developing mechanisms for inter-agency coordination. Furthermore, to ensure that sector-specific AI rules do not hamper innovation, U.S. lawmakers should also consider adding innovation as a statutory duty for regulators in enforcing the AI framework. Such a measure would help ensure that regulators not only consider identified and prioritized AI risks in agency rulemaking but that they also consider the potential risks of slowed innovation due to an overly restrictive regulatory approach.[13]

III. Mechanisms to Support the Implementation of the U.S. AI Framework

The U.S. government should consider developing mechanisms to support the implementation of the U.S. AI framework and help ensure that AI principles and guidelines are applied uniformly across different sectors. While a principles-based, context-specific approach to AI would allow the United States to develop flexible and well-calibrated rules for AI in different sectors, this strategy comes with certain challenges that would need to be addressed in the U.S. AI framework.

A central challenge is that, since individual regulators have the flexibility to issue guidelines and adjust rules based on broader AI principles, there is a risk that such guidelines will not be applied uniformly across different sectors.[14] Such differences would not only create market uncertainties but would also pose a particular challenge when certain AI applications come under the jurisdiction of multiple regulators. A hypothetical example is an AI-enabled investment advisory product that handles users’ personal data – which could be subject to the overlapping jurisdiction of the Federal Trade Commission, the Federal Deposit Insurance Corporation, the Consumer Financial Protection Bureau, and even state regulators.[15] The U.S. AI framework should, therefore, include preemptive mechanisms to address the regulatory inconsistencies that could arise under a more flexible, decentralized approach to AI governance.

The UK’s proposed mechanisms for the implementation of its AI framework could provide a useful starting point for U.S. policymakers to think more analytically about such issues and design policies accordingly. The UK’s AI White Paper proposes seven supporting mechanisms for the following objectives: i) monitoring the overall effectiveness of the AI framework; ii) supporting the coherent application of AI principles across the economy; iii) assessing and addressing cross-sectoral risks from AI applications; iv) providing support and guidance to businesses; v) improving business and consumer awareness of trustworthy AI; vi) conducting horizon scanning for emerging risks and regulatory trends; and vii) monitoring global regulatory developments.[16]

While the precise mechanisms would need to be calibrated and adapted to U.S. policy objectives and regulatory architecture, these proposals point to important challenges that U.S. lawmakers should consider while pursuing a more decentralized approach to AI regulation. The table below provides some potential mechanisms – based on the UK government’s AI White Paper – that Congress and the Biden administration could consider while designing the U.S. AI framework (Table 1).

Table 1. Functions to Support the Implementation of a Potential U.S. AI Framework

Functions and Potential Activities

1) Monitoring, Assessment, and Feedback

i) Develop and maintain monitoring and evaluation mechanisms to assess the economic impacts of the U.S. AI framework across different sectors and for the entire economy.

ii) Collect data and stakeholder input from regulators, the private sector, think tanks, and academic institutions to evaluate the U.S. AI framework’s overall effectiveness.

iii) Monitor the framework’s effectiveness in maintaining a proportionate approach to AI.

iv) Assess the effectiveness of regulatory coordination between different agencies in regulating AI.

2) Coherent Implementation of AI Principles

i) Develop guidelines to support regulators in implementing the U.S. AI framework.

ii) Identify potential inconsistencies in the way that different regulators apply AI principles.

iii) Create a platform for regulators to discuss and address regulatory inconsistencies.

iv) Monitor the continued relevance of the AI principles established in the framework.

3) Cross-Sectoral Risk Assessment

i) Create a register of potential AI risks to support their evaluation and the development of the cross-sectoral risk assessment framework.

ii) Monitor and review prioritized risks and identify emerging risks.

iii) Provide a platform to clarify regulatory responsibilities, issue joint regulatory guidance, and share regulatory best practices.

4) Support for Innovators

i) Identify potential regulatory barriers to AI innovation in different sectors.

ii) Assist regulators in creating and monitoring the effectiveness of AI sandboxes.

5) Education and Awareness

i) Provide informal guidance to businesses on navigating the U.S. AI regulatory landscape.

ii) Advise start-ups and companies on identifying and applying to the appropriate sandbox.

iii) Improve consumer awareness of, and public trust in, how AI is regulated in the U.S.

iv) Support the creation of innovation hubs, which are typically launched by regulators to provide start-ups and companies with information about AI-related legal obligations, help them identify business opportunities, and encourage investment in the U.S. AI ecosystem. Innovation hubs can also help start-ups identify and apply to the appropriate sectoral AI sandbox.

6) Horizon Scanning

i) Monitor emerging trends in U.S. and global AI governance, new technological developments, and emerging AI risks.

ii) Work with actors from the private sector, universities, and think tanks to identify, prioritize, and mitigate emerging risks.

7) International Regulatory Frameworks

i) Monitor AI-related foreign legislation and global regulatory developments and evaluate potential implications for the U.S. regulatory approach and the broader AI ecosystem.

ii) Provide recommendations on improving cross-border regulatory cooperation on AI.

iii) Monitor alignment between the U.S. and international AI frameworks developed by multilateral organizations like the OECD and the Global Partnership on AI.

iv) Evaluate U.S. compatibility with global AI standards and identify opportunities to harmonize standards and reduce barriers to trade and cross-border data flows.

v) Recommend policies to improve the U.S. regulatory approach based on the successes and failures of regulatory approaches in the EU, the UK, Japan, and other major jurisdictions.

Source: Author, based on recommendations by the UK Department for Science, Technology, and Innovation (DSTI) and the Office for AI (2023).[17]

IV. Risk Assessment Mechanisms to Identify and Mitigate Future AI Risks

A major challenge in AI governance is to develop a proportionate risk-management framework to identify, prioritize, and mitigate potential risks. The differences in how various jurisdictions seek to evaluate and mitigate such risks can provide insights into how U.S. lawmakers could develop an agile, multi-stakeholder framework to identify and mitigate future risks. At the risk of oversimplification, the EU’s proposed AI Act classifies AI systems into four categories of risk: i) “minimal-risk” AI systems, which require AI developers to comply with a code of conduct; ii) “limited-risk” AI systems, which require providers to comply with certain transparency requirements; iii) “high-risk” AI systems, which must undergo a more rigorous conformity assessment; and iv) AI systems with “unacceptable risks,” which are banned across the EU. The EU also provides lists of AI uses that would be classified as “limited” and “high risk” (Table A3).[18]

While the European Union’s risk-based approach sounds reasonable at first glance, it has two major problems. First, unlike the UK’s context-specific approach, the EU’s one-size-fits-all approach to risk assessment does not create a framework flexible enough to distinguish between the risks associated with different AI applications within the same sector. For instance, whereas the EU’s AI Act would treat all AI-enabled tasks related to the operation and management of critical infrastructure as “high risk,”[19] the UK’s context-specific approach recognizes that, even within high-risk sectors like critical infrastructure, not all AI uses carry the same risks, and they should not be subject to uniform compliance and liability standards.[20]

Under the European Union’s proposed AI Act, low-risk AI applications within sectors classified as “high-risk” – such as education, employment, and law – would be subject to much more restrictive regulations than would be the case under the UK’s AI framework (Table A3). For example, since the EU considers the use of AI in education to be high risk, AI-enabled language proficiency examinations by online platforms – which often provide a much cheaper and more accessible alternative to traditional language tests like the TOEFL and IELTS – would be subject to the same compliance standards as the use of AI in other high-risk areas like medical diagnostics and critical infrastructure.[21] Such a restrictive approach risks hampering innovation in online learning platforms, legal services, and other areas that the EU classifies as “high risk” under the AI Act.[22]

Beyond being restrictive, the AI Act’s risk assessment framework might also struggle to address future risks flexibly. Although generative AI applications like ChatGPT have become widespread since late 2022, the pace and scope of their rapid development would have been difficult to predict even five years ago. Likewise, despite the best efforts of lawmakers, regulators, and technologists alike, predicting future AI risks remains a highly uncertain business. As such, it is difficult to predict accurately the AI landscape ten years from now and the unique set of risks and challenges such developments will pose. If the U.S. adopts a similar approach of classifying prespecified AI uses as “high risk” in statutes, it risks constraining innovation while remaining less flexible in identifying and mitigating future AI risks.

Compared to the EU’s approach, the UK’s proposed strategy of continuously monitoring AI risks and enabling public-private collaboration to identify emerging risks represents a more flexible approach to risk management. Instead of classifying a list of AI applications as high risk, the UK has proposed a principles-based risk assessment framework, which sectoral regulators will use to evaluate risks within their regulatory remit. Furthermore, the UK government has proposed the creation of “central risk functions” – separate from sectoral regulators – that would monitor the effectiveness of the AI framework, track current and future AI risks, and advise the government on which risks should be prioritized. With closer regulatory cooperation between the government, regulators, and the private sector, this approach is more likely to enable more robust monitoring of potential AI risks, as well as the introduction or calibration of appropriate statutory instruments to address future risks as they emerge.[23]

A comparable U.S. mechanism – involving Congress and the federal government, sectoral regulators, the private sector, and independent risk evaluators – could be designed to identify and respond to future AI risks (Table A4). As part of this arrangement, Congress and the federal government would establish the overall U.S. AI framework and clarify risk management guidelines for sectoral regulators based on the AI framework. In turn, the sectoral regulators would enforce such guidelines within their regulatory remit, address prioritized AI risks, calibrate rules based on regulatory experience and stakeholder input, and recommend whether the U.S. AI framework should prioritize other emerging risks. The central risk function – ideally comprising experts, officials, and private sector representatives independent of the sectoral regulators – would evaluate the effectiveness of this framework, identify emerging AI risks, and advise Congress and the federal government on whether an intervention is required and, if so, which regulators are best suited to address such emerging risks (Table A4).[24]

While such proposals need to be more carefully evaluated and adjusted to suit the unique features of the U.S. regulatory architecture and policy objectives, they provide a useful starting point for thinking more strategically about ways to address future AI risks while maintaining a flexible regulatory approach. Furthermore, developing mechanisms to identify and address both current and emerging AI risks would help improve public trust in AI. It would also help alleviate the concerns of the public and policymakers alike about i) hypothetical AI risks that are unlikely to materialize and ii) overhyped threats to human existence from superintelligent AI systems whose development is likely at least several decades away.[25]

V. Strategies to Engage the Private Sector and Academic Institutions in AI Governance

The Biden administration should consider implementing mechanisms to engage the private sector and academic institutions more closely in AI governance. Such mechanisms are important for two reasons. First, the private sector and academic institutions have been instrumental in driving AI innovation. Second, given the rapidly evolving nature of AI-enabled technologies, the AI governance landscape is often characterized by asymmetric information between regulators and the private sector. Developing mechanisms to continuously solicit feedback from external stakeholders in designing AI regulation is therefore crucial to maintaining a flexible regulatory approach.[26] 

Several policy tools could be incorporated into the U.S. AI framework to pursue closer engagement with private actors in developing AI rules. First, as discussed below, AI sandbox programs can help improve the regulatory understanding of emerging technologies and craft proportionate rules for AI applications in different sectors. Second, innovation hubs can serve as another source of information, helping AI startups and businesses become aware of new commercial and investment opportunities, as well as compliance requirements associated with AI applications in different sectors.[27]

Finally, soliciting feedback from businesses and monitoring the economic impact of AI regulations should also be part of the U.S. AI strategy. To that end, AI working groups comprising regulators, academic and policy experts, and business representatives can provide an avenue for continued engagement between the private and public sectors in shaping AI governance.[28] 

VI. Artificial Intelligence Sandboxes to Improve the Regulatory Understanding of AI Technologies and Craft Flexible AI Rules

The U.S. government should create sectoral AI sandboxes to maximize the benefits of a flexible, innovation-focused approach to AI regulation. Such programs would allow companies to offer innovative AI products under close regulatory supervision for a limited period and receive regulatory waivers, expedited registration, and/or guidance for compliance with relevant laws. Meanwhile, regulators can gain a more in-depth understanding of how emerging AI technologies and business models interact with existing sectoral rules. Based on such insights, policymakers can craft better rules that help promote AI innovation while minimizing potential risks.[29]

Recognizing the innovation potential of AI sandboxes, the OECD recommends the creation of such programs at the national level.[30] Following its recently concluded consultation, the UK government is currently evaluating different models for designing AI sandbox program(s).[31] Likewise, as outlined in the draft AI Act, the European Commission encourages the creation of national AI sandboxes in member states (Spain launched the first such sandbox last year).[32] However, such programs need to be designed appropriately to maximize their innovation potential, as NTUF pointed out in its recent AI governance consultation response to the UK government.[33]

More specifically, U.S. lawmakers should consider creating sector-specific sandboxes to promote AI innovation in particular sectors and update sectoral legal frameworks accordingly. Furthermore, while sandboxes should entail close regulatory supervision and appropriate consumer protection provisions, they must also provide regulatory relief and guidance to make such programs attractive to innovative businesses. Finally, opening AI sandboxes to non-U.S. companies could help attract innovative foreign businesses to the United States and promote innovation.[34]

VII. International AI Sandboxes to Promote Transatlantic Innovation and Cooperation

To maximize the benefits of AI sandbox programs, the United States should go one step further and design reciprocal AI sandboxes with like-minded countries such as France, Germany, Switzerland, and the UK. While, to the best of our knowledge, no major jurisdiction has created such a program, U.S. state legislation establishing state-level sandbox programs typically includes language indicating that state governments can create reciprocal sandbox arrangements with foreign regulators.[35] Reciprocal sandbox programs designed at the federal level would provide sandbox participants from signatory countries easier access to equivalent U.S. sandboxes and vice versa.

Such programs could be particularly attractive to innovative foreign AI startups and companies that seek to understand and comply with U.S. regulatory requirements and launch their products in U.S. markets. Likewise, reciprocal sandboxes could help U.S. businesses understand and comply with foreign regulatory frameworks, such as the EU’s AI Act, and offer innovative products in those markets. By facilitating closer collaboration between foreign regulators and companies and promoting the harmonization of AI rules and standards, reciprocal sandboxes could also help strengthen transatlantic tech cooperation and economic relations.

VIII. Strengthened Bilateral Cooperation and Multilateral Engagement in AI Governance

Beyond AI sandboxes, the United States should consider other mechanisms – such as joint declarations, executive agreements, and joint research programs – to strengthen tech cooperation at the bilateral level. In this context, the Joint U.S.-UK Declaration on Cooperation in AI Research and Development in September 2020 and the Atlantic Declaration in June 2023 were steps in the right direction.[36] Likewise, the U.S.-EU Trade and Technology Council represents another forum through which the United States could pursue closer economic and technological cooperation with the EU and EU member states. Similar opportunities also exist for bilateral cooperation with Switzerland and Japan, both of which seek to adopt a flexible, light-touch approach to AI governance.[37] Establishing research partnerships – similar to Canada and the UK’s arrangements with Japan and the EU – could also help deepen U.S. technology cooperation with like-minded nations.

Ultimately, the U.S. needs to look beyond bilateral relationships and strengthen its multilateral engagement in global AI governance. Although the United States is part of several multilateral fora and institutions that are active in AI governance – such as the OECD and the Global Partnership on AI – the U.S. appears to punch below its weight in shaping AI norms through these organizations. By participating more actively in such organizations – as Japan and the UK have done with their multi-stakeholder, multilateralist approaches to AI governance – the U.S. government can contribute more actively to the development of international AI norms and technical standards.[38]

The development of such norms could be particularly beneficial for emerging-market and developing countries, many of which lack a robust AI governance infrastructure and look to international institutions for best practices in responsible AI. Along with like-minded partners – such as the EU, the UK, and Japan – the United States could play a more important role in establishing multilateral platforms for AI governance dialogues between the governments of industrialized and emerging-market countries. As officials and lawmakers in various jurisdictions seek to develop national AI strategies, the United States and partner countries can do more to advocate a principles-based, innovation-focused approach to AI that promotes economic growth and innovation while mitigating current and future AI risks.

Appendix

Table A1. Characteristics of the UK’s Pro-Innovation AI Framework

Pro-innovation: Enabling rather than stifling responsible innovation.

Proportionate: Avoiding unnecessary or disproportionate burdens for businesses and regulators.

Trustworthy: Addressing real risks and fostering public trust in AI in order to promote and encourage its uptake.

Adaptable: Enabling us to adapt quickly and effectively to keep pace with emergent opportunities and risks as AI technologies evolve.

Clear: Making it easy for actors in the AI life cycle, including businesses using AI, to know what the rules are, who they apply to, who enforces them, and how to comply with them.

Collaborative: Encouraging government, regulators, and industry to work together to facilitate innovation, build trust, and ensure that the voice of the public is heard and considered.

Source: DSTI and UK Office for AI (2023).[39] 

 

Table A2. Guidelines for Regulators for Applying the UK’s AI Framework

Proportionate, context-specific, and flexible approach: Adopt a proportionate approach that promotes growth and innovation by focusing on the risks that AI poses in a particular context.

Prioritized risks and risk assessments: Consider proportionate measures to address prioritized risks, taking into account cross-cutting risk assessments undertaken by, or on behalf of, government.

Regulatory enforcement: Design, implement, and enforce appropriate regulatory requirements and, where possible, integrate delivery of the principles into existing monitoring, investigation, and enforcement processes.

Regulatory flexibility: Adapt quickly and effectively to keep pace with emergent opportunities and risks as AI technologies evolve.

Awareness and transparency: Make it easy for actors in the AI life cycle, including businesses using AI, to know what the rules are, who they apply to, who enforces them, and how to comply with them.

Collaboration and public trust: Encourage government, regulators, and industry to work together to facilitate innovation, build trust, and ensure that the voice of the public is heard and considered.

Source: DSTI and UK Office for AI (2023).[40] 

 

Table A3. Categories of AI Risks Under the European Union’s Proposed AI Act

Unacceptable risk (prohibited): Social scoring, facial recognition, dark-pattern AI, and manipulation (Art. 5).

High risk (conformity assessment): Education, employment, justice, immigration, and law (Art. 6 & ss.).

Limited risk (transparency requirements): Chatbots, deep fakes, and emotion recognition (Art. 52).

Minimal risk (code of conduct): Spam filters and video games (Art. 69).

Source: Lilian Edwards, Ada Lovelace Institute (2022).[41]

 

Table A4. Designing a U.S. Central Risk Function Mechanism for Artificial Intelligence Risks

Congress and the Federal Government
Identification*: i) Creates the U.S. AI framework to identify AI risks; ii) Decides which risks to tolerate, regulate, and prioritize.
Enforcement: Delegates the enforcement of the AI framework to sectoral regulators.
Monitoring*: Updates the statutory framework to address new risks if identified.

Central Risk Function Mechanism
Identification*: i) Identifies and prioritizes new AI risks; ii) Provides recommendations on whether new risks require government intervention.
Enforcement: i) Recommends which regulator(s) should address those risks; ii) Creates overall risk assessment frameworks; iii) Provides advice to regulators on technical aspects of regulation; iv) Shares AI regulatory best practices.
Monitoring*: Monitors risks and reports them to Congress and the Executive.

Sectoral Regulators
Identification*: i) Identify and prioritize sector-specific AI risks; ii) Evaluate whether newly identified risks should be prioritized and addressed.
Enforcement: i) Create regulatory guidance for businesses based on the central risk function’s risk assessment framework; ii) Update regulatory guidelines and rules based on stakeholder feedback on how effectively they are working; iii) Take enforcement actions against companies for violations.
Monitoring*: Report on the effectiveness of addressing AI risks.

Businesses
Identification*: Provide information to sectoral regulators and the central risk function, as necessary and appropriate.
Enforcement: Comply with regulatory guidance and rules and incorporate the risk assessment framework into internal practice.
Monitoring*: Inform the relevant regulator(s) and the central risk function mechanism if risk mitigation measures fail to address the risks.

* The identification and monitoring functions (marked with an asterisk) comprise a regulatory feedback loop between the federal government, sectoral regulators, the central risk function, and businesses subject to the AI framework to identify and mitigate emerging AI risks.

Source: Author, based on DSTI and UK Office for AI (2023).[42]

 

 


[1] Office of Science and Technology Policy (OSTP), “Request for Information: National Priorities for Artificial Intelligence,” Federal Register 88, no. 102 (May 26, 2023): 34194, https://www.federalregister.gov/documents/2023/05/26/2023-11346/request-for-information-national-priorities-for-artificial-intelligence.

[2] European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,” COM (2021) 206 final (April 21, 2021), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206.

[3] Patrick Glauner and Kai Zenner, “KI-Verordnung – Bärendienst für die heimischen KMU” [“AI Regulation – Disservice to Domestic SMEs”], Der Tagesspiegel, April 19, 2023, https://background.tagesspiegel.de/digitalisierung/ki-verordnung-baerendienst-fuer-die-heimischen-kmu.

[4] KI-Bundesverband [German AI Association], “Positionspapier des KI-Bundesverband e.V. zur EU-Regulierung von Künstlicher Intelligenz” [“Position Paper of the German AI Association on the EU’s AI Act”], March 2021, https://ki-verband.de/wp-content/uploads/2022/02/KI_Regulierung_DE-komprimiert.pdf. Javier Espinoza, “European companies sound alarm over draft AI law,” Financial Times, June 30, 2023, https://www.ft.com/content/9b72a5f4-a6d8-41aa-95b8-c75f0bc92465.

[5] Benoit Berthelot, “Macron Calls for French AI Innovation as EU Votes to Regulate,” Bloomberg, June 14, 2023, https://www.bloomberg.com/news/articles/2023-06-14/macron-calls-for-french-ai-innovation-after-eu-votes-for-ai-act-restrictions. Bayerische Staatsregierung [Bavarian State Government], “Studie zu KI-Regulierung: EU-Regeln stellen Unternehmen vor große Hürden / Digitalministerin Gerlach: Innovation nicht durch Überregulierung ausbremsen” [“Study on AI Regulation: EU Rules Will Pose Major Hurdles for Companies/Digital Minister Gerlach: Do not slow down innovation through overregulation”], press release, March 28, 2023, https://www.bayern.de/studie-zu-ki-regulierung-eu-regeln-stellen-unternehmen-vor-grosse-huerden-digitalministerin-gerlach-innovation-nicht-durch-ueberregulierung-ausbremsen/.

[6] UK Department for Science, Technology, and Innovation (DSTI) and the Office for Artificial Intelligence, “A Pro-Innovation Approach to AI Regulation,” policy paper, updated June 22, 2023, https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.

[7] Hiroki Habuka, “Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency,” Center for Strategic and International Studies, February 14, 2023, https://www.csis.org/analysis/japans-approach-ai-regulation-and-its-impact-2023-g7-presidency. Ryan Morrison, “Japan becomes latest country proposing hands-off AI regulation, but businesses ‘likely to follow EU rules,’” Tech Monitor, July 4, 2023, https://techmonitor.ai/technology/ai-and-automation/japan-ai-europe-regulation-artificial-intelligence.

[8] Ryan Nabil, “Consultation Response to the UK Office for Artificial Intelligence: Principles for a Pro-Innovation Approach to AI Governance,” National Taxpayers Union Foundation, June 21, 2023, https://www.ntu.org/foundation/detail/consultation-response-to-the-uk-office-for-artificial-intelligence-principles-for-a-pro-innovation-approach-to-ai-governance/.

[9] Nabil, “UK Approach to AI Governance.” DSTI, “Pro-Innovation Approach to AI.”

[10] Ibid.

[11] Ibid. Note that the OECD’s principles are worded slightly differently: i) “inclusive growth, sustainable development, and well-being”; ii) “human-centred values and fairness”; iii) “transparency and explainability”; iv) “robustness, security and safety”; and v) “accountability”. Organisation for Economic Co-operation and Development, “OECD AI Principles Overview,” n.d., https://oecd.ai/en/ai-principles.

[12] Japanese Ministry of Economy, Trade, and Industry (METI), Expert Group on How AI Principles Should Be Implemented, “Governance Guidelines for Implementation of AI Principles,” January 28, 2022, https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20220128_2.pdf.

[13] Nabil, “UK Approach to AI Governance.” DSTI, “Pro-Innovation Approach to AI.”

[14] Ibid.

[15] Ryan Nabil, “How Regulatory Sandbox Programs Can Promote Technological Innovation and Consumer Welfare: Insights from Federal and State Experience,” Competitive Enterprise Institute OnPoint, no. 281 (2022), https://cei.org/studies/how-regulatory-sandbox-programs-can-promote-technological-innovation-and-consumer-welfare/.

[16] DSTI, “Pro-Innovation Approach to AI.”

[17] Ibid.

[18] Lilian Edwards, “The EU AI Act: a summary of its significance and scope,” Ada Lovelace Institute, April 2022, https://www.adalovelaceinstitute.org/wp-content/uploads/2022/04/Expert-explainer-The-EU-AI-Act-11-April-2022.pdf.

[19] Dechert LLP, “European Commission’s Proposed Regulation on Artificial Intelligence: Conducting a Conformity Assessment for High-Risk AI - Say What?,” November 16, 2021, https://www.dechert.com/knowledge/onpoint/2021/11/european-commission-s-proposed-regulation-on-artificial-intellig.html.

[20] DSTI, “Pro-Innovation Approach to AI.”

[21] Ibid.

[22] Ryan Nabil, “The EU’s Recently Proposed Artificial Intelligence Act Goes Too Far,” The National Interest, August 21, 2021, https://nationalinterest.org/blog/buzz/eu’s-recently-proposed-artificial-intelligence-act%C2%A0goes-too-far-191733.

[23] DSTI, “Pro-Innovation Approach to AI.”

[24] Nabil, “UK Approach to AI Governance.”

[25] For examples of such concerns, see Anna Tong, “AI threatens humanity’s future, 61% of Americans say: Reuters/Ipsos poll,” Reuters, May 17, 2023, https://www.reuters.com/technology/ai-threatens-humanitys-future-61-americans-say-reutersipsos-2023-05-17/. Andrew Gregory and Alex Hern, “AI poses existential threat and risk to health of millions, experts warn,” The Guardian, May 9, 2023, https://www.theguardian.com/technology/2023/may/10/ai-poses-existential-threat-and-risk-to-health-of-millions-experts-warn.

[26] Ryan Nabil, “Strategies to Improve the National Artificial Intelligence Research and Development Strategic Plan,” Competitive Enterprise Institute OnPoint, no. 282 (2022), https://cei.org/studies/strategies-to-improve-the-national-artificial-intelligence-research-and-development-strategic-pla/.

[27] Nabil, “How Regulatory Sandbox Programs Can Promote Innovation.”

[28] DSTI, “Pro-Innovation Approach to AI.”

[29] Nabil, “How Regulatory Sandbox Programs Can Promote Innovation.”

[30] Laura Galindo-Romero, Karine Perset, and Francesca Sheeka, “An overview of national AI strategies and policies,” Going Digital Toolkit Note, no. 14 (2021), https://goingdigital.oecd.org/data/notes/No14_ToolkitNote_AIStrategies.pdf.

[31] DSTI, “Pro-Innovation Approach to AI.”

[32] European Commission, “Proposal for AI Act.”

[33] Nabil, “UK Approach to AI Governance.”

[34] Nabil, “How Regulatory Sandbox Programs Can Promote Innovation.”

[35] Ibid.

[36] “The Atlantic Declaration: A framework for a twenty-first century US-UK Economic Partnership,” June 8, 2023, https://www.gov.uk/government/publications/the-atlantic-declaration. “Declaration of the United States of America and the United Kingdom of Great Britain and Northern Ireland on Cooperation in Artificial Intelligence Research and Development: A Shared Vision for Driving Technological Breakthroughs in Artificial Intelligence,” September 25, 2020, https://www.gov.uk/government/publications/declaration-of-the-united-states-of-america-and-the-united-kingdom-of-great-britain-and-northern-ireland-on-cooperation-in-ai-research-and-development.

[37] Staatssekretariat für Bildung, Forschung und Innovation [State Secretariat for Education, Research, and Innovation], “Herausforderungen der künstlichen Intelligenz: Bericht der interdepartementalen Arbeitsgruppe «Künstliche Intelligenz» an den Bundesrat” [“Challenges of Artificial Intelligence: Report of the Interdepartmental Working Group on Artificial Intelligence to the Federal Council”], December 2019, https://www.sbfi.admin.ch/sbfi/de/home/bfi-politik/bfi-2021-2024/transversale-themen/digitalisierung-bfi/kuenstliche-intelligenz.html. Der Bundesrat [The Federal Council], “Leitlinien «Künstliche Intelligenz» für den Bund: Orientierungsrahmen für den Umgang mit künstlicher Intelligenz in der Bundesverwaltung” [“Artificial Intelligence Guidelines for the Federal Government: Orientation Framework for Dealing with Artificial Intelligence in the Federal Administration”], November 2020, https://www.sbfi.admin.ch/sbfi/de/home/bfi-politik/bfi-2021-2024/transversale-themen/digitalisierung-bfi/kuenstliche-intelligenz.html. METI, “Governance Guidelines for Implementation of AI Principles.”

[38] METI, “Governance Guidelines for Implementation of AI Principles.”

[39] DSTI, “Pro-Innovation Approach to AI.”

[40] Ibid.

[41] Edwards, “The EU AI Act.”

[42] DSTI, “Pro-Innovation Approach to AI.”