Consultation Response to the UK Office for Artificial Intelligence: Principles for a Pro-Innovation Approach to AI Governance

Office for Artificial Intelligence
Department for Science, Innovation and Technology
100 Parliament St 
London SW1A 2BQ
evidence@officeforai.gov.uk
 
Re: Consultation Response to the Artificial Intelligence White Paper 
 
Ryan Nabil
Director and Senior Fellow, Technology Policy
National Taxpayers Union Foundation
122 C St NW, Washington, DC 
 
21 June 2023  
 
Introduction 
 
On behalf of the National Taxpayers Union Foundation (NTUF), I welcome the opportunity to submit the following written evidence in response to the Government’s AI governance consultation. Based in Washington, DC, the National Taxpayers Union is the oldest taxpayer advocacy organisation in the United States. Its affiliated think tank, NTUF, conducts evidence-based research on economic and technology policy issues of interest to taxpayers, including US and international approaches to data protection, artificial intelligence, and emerging technologies. 
 
NTUF appreciates Government’s intention to adopt a flexible, pragmatic approach to regulating AI applications and its efforts to seek stakeholder input and expert comment through this consultation. By adopting an innovation-first approach to AI technologies, Government can set an example for AI governance on both sides of the Atlantic. 
 
Responses to the specific questions in the Government’s consultation document are provided below.
 
I. Our revised AI principles
 
Question 1. Do you agree that requiring organisations to make it clear when they are using AI would improve transparency? 
 
Strongly agree. 
 
Question 2. Are there other measures we could require of organisations to improve transparency for AI?
 
Efforts led by the private sector to define and establish responsible AI norms could help promote transparency. To that end, Government should consider establishing working groups and other collaborative mechanisms and, if needed, implementing additional rules on their recommendation. 
 
Question 3. Do you agree that current routes to contest or get redress for AI-related harms are adequate?
 
Somewhat agree.
 
Question 4. How could current routes to contest or seek redress for AI-related harms be improved, if at all?
 
Instead of creating a new legal route specifically for AI-related harms, existing statutory frameworks should be used and, if necessary, updated to address such harms. This approach would create more consistent, technology-neutral rules for addressing harms from different emerging technologies, including AI, as they become more widely available. 
 
Question 5. Do you agree that, when implemented effectively, the revised cross-sectoral principles will cover the risks posed by AI technologies? Our principles are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; contestability and redress.
 
Strongly agree. 
 
Question 6. What, if anything, is missing from the revised principles?
 
First, innovation should be introduced as a core AI principle, which would complement the Government’s objective of making the UK a global leader in AI innovation. 
 
Second, while algorithmic transparency and explainability are fundamental in many contexts (e.g., the use of AI tools by law enforcement), Government must ensure that these requirements do not weaken the effectiveness of AI systems in other promising areas, such as encryption and medical diagnostics.  
 
II. A statutory duty to have due regard to the principles
 
Question 7. Do you agree that introducing a statutory duty on regulators to have due regard to the principles would clarify and strengthen regulators’ mandates to implement our principles while retaining a flexible approach to implementation?
 
Strongly agree.
 
Question 8. Is there an alternative statutory intervention that would be more effective?
 
Government should consider adding innovation as a statutory duty for AI regulators. Moreover, the statutory duty regarding algorithmic explainability should be clarified to ensure that it does not prevent the development of effective AI systems in less sensitive applications. 
 
III. New central functions
 
Question 9. Do you agree that the functions outlined in section 3.3.1 would benefit our AI regulation framework if delivered centrally?
 
Central Function | Response
Monitoring and evaluating the framework as a whole | Strongly agree
Assessing and monitoring cross-economy risks arising from the use of AI | Strongly agree
Scanning for future trends and analysing knowledge gaps to inform our response to emerging AI | Strongly agree
Supporting AI innovators to get new technologies to market | Strongly agree

Question 10. What, if anything, is missing from the central functions?

First, central functions should include mechanisms to monitor legislative and regulatory activities in adjacent tech-related areas (e.g., privacy and competition policy) to evaluate how such developments might affect AI innovation. 

Second, mechanisms to monitor and evaluate the relative successes and failures of other jurisdictions’ AI regulatory approaches could help the Government calibrate the UK’s AI strategy if needed. 
 
Third, Government should introduce mechanisms to monitor potential issues related to the creation and joint supervision of AI regulatory sandbox programmes by multiple regulators. 
 
Question 11. Do you know of any existing organisations who should deliver one or more of our proposed central functions?
 
Instead of a single organisation, groups comprising experts from different sectors and organisations would be better placed to deliver the central functions. For instance, an international working group composed of policy, legal, and technical experts could help identify trends in global AI governance and suggest ways to promote international alignment with like-minded jurisdictions.  
 
Question 12. Are there additional activities that would help businesses confidently innovate and use AI technologies?
 
Government should work with the private sector to develop and publish clear guidelines for industry-specific AI best practices. To that end, Government should consider developing additional mechanisms, such as innovation hubs and regulatory fora, to help companies understand the legal requirements associated with offering AI-enabled products in their respective sectors. 
 
Question 13. Are there additional activities that would help individuals and consumers confidently use AI technologies?
 
Greater transparency about AI applications and potential redress mechanisms could help promote consumer trust. Companies providing consumers with information about how AI and their data are used could be particularly helpful, especially if the legal content of such notices is summarised in accessible language. 
 
Question 14. How can we avoid overlapping, duplicative or contradictory guidance on AI issued by different regulators?
 
Providing a statutory basis for AI principles and creating frameworks for regulatory coordination could help ensure that regulators do not fundamentally diverge in their approach to AI. In the case of such divergence, an existing or newly created body could be granted powers to review contradictory guidelines and help resolve the issue.
 
IV. Monitoring and evaluation of the framework
 
Question 15. Do you agree with our overall approach to monitoring and evaluation?
 
Strongly agree. 
 
Question 16. What is the best way to measure the impact of our framework?
 
While it will be challenging to isolate the impact of this proposed framework on broader AI policy objectives, well-designed econometric studies could help measure its effects on consumer welfare, public trust, and technological innovation. Furthermore, analysing commercial and regulatory outcomes and consumer behaviour through the AI sandbox could allow Government to observe the framework’s impact on businesses and consumers in a more granular manner. 
 
Question 17. Do you agree that our approach strikes the right balance between supporting AI innovation; addressing known, prioritised risks; and future-proofing the AI regulation framework?
 
Strongly agree. 
 
Question 18. Do you agree that regulators are best placed to apply the principles and government is best placed to provide oversight and deliver central functions?
 
Yes.
 
V. Regulator capability
 
Question 20. Do you agree that a pooled team of AI experts would be the most effective way to address capability gaps and help regulators apply the principles?
 
Strongly agree. 
 
VI. Tools for trustworthy AI
 
Question 21. Which non-regulatory tools for trustworthy AI would most help organisations to embed the AI regulation principles into existing business processes?
 
Enabling private sector entities to develop guidelines and norms for building and using AI could help businesses incorporate the AI principles into sector-specific business processes and models. To that end, encouraging the development of private sector mechanisms to evaluate and improve trustworthy AI practices could promote responsible AI use without substantial governmental involvement. 
 
VII. Final thoughts on the framework
 
Question 22. Do you have any other thoughts on our overall approach? Please include any missed opportunities, flaws, and gaps in our framework.
 
Government should consider creating a joint reciprocal sandbox with like-minded jurisdictions such as the US and the EU, which would allow UK sandbox participants to offer their AI products overseas and vice versa. 
 
To that end, Government should consider creating US-UK and UK-EU AI policy working groups—composed of experts from UK and foreign think tanks, universities, and the private sector—to advise Government on pursuing closer transatlantic AI cooperation. 
 
VIII. Legal responsibility for AI
 
Question L1. What challenges might arise when regulators apply the principles across different AI applications and systems? How could we address these challenges through our proposed AI regulatory framework?
 
Regulators would need to adjust their priorities, as different AI systems are likely to involve different orderings of the AI principles (for example, fairness is a much more important consideration for AI use in law enforcement than for AI-enabled music and video streaming services). 
 
To better understand such challenges, Government should create multiple sectoral sandboxes. Such an arrangement would help policymakers observe how AI applications and business models interact with different sector-specific legal frameworks, and update those frameworks if needed.
 
Question L2. i. Do you agree that the implementation of our principles through existing legal frameworks will fairly and effectively allocate legal responsibility for AI across the life cycle?
 
Somewhat agree.
 
Question L2. ii. How could it be improved, if at all?
 
No response. 
 
Question L3. If you work for a business that develops, uses, or sells AI, how do you currently manage AI risk including through the wider supply chain? How could government support effective AI-related risk management?
 
No response.
 
IX. Artificial intelligence sandboxes and testbeds
 
Question S1. To what extent would the sandbox models described in section 3.3.4 support innovation?
 
Sandbox Model | Definition | Response
Single sector, single regulator | “[S]upport innovators to bring AI products to the market in collaboration with a single regulator, focusing on only one chosen industry sector”. | Somewhat support innovation
Multiple industry sectors, single regulator | “[S]upport AI innovators in collaboration with a single regulator that is capable of working across multiple sectors”. | Somewhat prevent innovation
Single sector, multiple regulators | “[E]stablish a sandbox that operates in only one industry sector, but is capable of supporting AI innovators whose path to market requires interaction with one or more regulators operating in that sector”. | Somewhat support innovation
Multiple sectors, multiple regulators | “[A] sandbox capable of operating with one or more regulators in one or more industry sectors to help AI innovators reach their target market. The DRCF is piloting a version of this model”. | Strongly support innovation

 

Question S2. What could government do to maximise the benefit of sandboxes to AI innovators?

Government should consider the creation of a joint reciprocal sandbox with the US and the EU, which would allow UK sandbox participants to offer their AI products and services in participating jurisdictions and vice versa. A reciprocal sandbox could help attract innovative foreign start-ups seeking to enter the UK and European markets. 

Question S3. What could government do to facilitate participation in an AI regulatory sandbox?

Lowering entry barriers and opening the sandbox to foreign start-ups and companies would make it easier for innovative international start-ups to join the AI sandbox. 

Furthermore, Government should consider establishing an innovation hub, which would provide information on the opportunities and legal requirements for launching AI start-ups in the UK and advise potential applicants on applying to the sandbox programme. 

Question S4. Which of the following industry sectors do you believe would most benefit from an AI sandbox?

The following industry sectors are likely to benefit the most from an AI sandbox: 

  1. Financial services and insurance
  2. Communications 
  3. Information technology 
  4. Legal services 
  5. Transportation 
  6. Healthcare 
  7. Education 
  8. Public sector 
  9. Artificial intelligence, digital and technology 
  10. Regulation