
House of Commons
Science, Innovation and Technology Committee

Governance of artificial intelligence (AI)
Third Report of Session 2023–24

Report, together with formal minutes relating to the report

Ordered by the House of Commons to be printed 23 May 2024

HC 38
Published on 28 May 2024
by authority of the House of Commons
Science, Innovation and Technology Committee
The Science, Innovation and Technology Select Committee is appointed by the
House of Commons to examine the expenditure, administration and policy of the
Department for Science, Innovation and Technology, and associated public bodies.
It also exists to ensure that Government policies and decision-making across
departments are based on solid scientific evidence and advice.
Current membership

Rt Hon Greg Clark MP (Conservative, Tunbridge Wells) (Chair)


Dawn Butler MP (Labour, Brent Central)
Chris Clarkson MP (Conservative, Heywood and Middleton)
Dame Tracey Crouch MP (Conservative, Chatham and Aylesford)
Dr James Davies MP (Conservative, Vale of Clwyd)
Katherine Fletcher MP (Conservative, South Ribble)
Rebecca Long-Bailey MP (Labour, Salford and Eccles)
Stephen Metcalfe MP (Conservative, South Basildon and East Thurrock)
Carol Monaghan MP (Scottish National Party, Glasgow North West)
Graham Stringer MP (Labour, Blackley and Broughton)
Christian Wakeford MP (Labour, Bury South)
Powers

The Committee is one of the departmental select committees, the powers of which
are set out in House of Commons Standing Orders, principally in SO No. 152. These
are available on the internet via www.parliament.uk.
Publication

© Parliamentary Copyright House of Commons 2024. This publication may be reproduced under the terms of the Open Parliament Licence, which is published at www.parliament.uk/site-information/copyright-parliament.
Committee reports are published on the Committee’s website at
www.parliament.uk/science and in print by Order of the House.
Committee staff

The current staff of the Committee are: Ethan Attwood (POST Fellow), Jessica
Bridges-Palmer (Senior Select Committee Media Officer), Martha Comerford
(Digital Account Manager), Ian Cruse (Second Clerk), Stella-Maria Gabriel
(Committee Operations Manager), Arvind Gunnoo (Committee Operations Officer),
Dr Faten Hussein (Committee Team Leader (Clerk)), Dr Misha Patel (Committee
Specialist), Dr Joshua Pike (Committee Specialist), and Ben Shave (Committee
Specialist)
The following staff also worked for the Committee during this inquiry:
Gina Degtyareva (Former Senior Select Committee Media Officer), Dr Claire
Housley (Former Committee Specialist), Dr Claire Kanja (Committee Specialist),
Hafsa Saeed (Former Committee Operations Manager)
Contacts

All correspondence should be addressed to the Clerk of the Science, Innovation and
Technology Committee, House of Commons, London, SW1A 0AA. The telephone
number for general inquiries is: 020 7219 2793; the Committee’s e-mail address is:
commonssitc@parliament.uk.
You can follow the Committee on X (formerly Twitter) using @CommonsSITC.

Contents
Summary

1 Introduction
    Our inquiry
    Aims of this Report

2 The case for AI
    The deployment phase
    Pros and cons
    Energy consumption
    Making the case?

3 AI-specific legislation
    A principles-based approach
    AI-specific legislation

4 The role of regulators
    Powers
    Coordination
    Resourcing

5 AI in the public and private sectors
    Public sector
    i.AI
    Public sector productivity programme
    Ministerial and Cabinet groups
    Encouraging adoption
    Private sector

6 The AI Safety Institute
    From Taskforce to Institute
    The AI Safety Summit
    What was achieved?
    Future priorities

7 The international dimension
    The United States
    Voluntary commitments
    Executive Order
    State-level initiatives
    The EU AI Act
    A risk-based approach
    Implementation, enforcement and exemptions
    China
    International standards

8 Twelve Challenges of AI Governance revisited
    1: The Bias Challenge
    2: The Privacy Challenge
    3: The Misrepresentation Challenge
    4: The Access to Data Challenge
    5: The Access to Compute Challenge
    6: The Black Box Challenge
    7: The Open-Source Challenge
    8: The Intellectual Property and Copyright Challenge
    9: The Liability Challenge
    10: The Employment Challenge
    11: The International Coordination Challenge
    12: The Existential Challenge

Conclusions and recommendations

Formal minutes

Witnesses

Published written evidence

List of Reports from the Committee during the current Parliament



Summary
Since the publication of our interim Report examining the governance of artificial
intelligence (AI) in August 2023, debates over how to regulate the development and
deployment of AI have continued. These debates have often centred around the Twelve
Challenges of AI Governance we identified in our interim Report.

This Report examines domestic and international developments in the governance and
regulation of AI since the publication of our interim Report. It also revisits the Twelve
Challenges of AI Governance we identified in our interim Report and suggests how they
might be addressed by policymakers. Our conclusions and recommendations apply to
whoever is in Government after the General Election.

We have sought to reflect the uncertainty that exists over many questions that are
critical to the future shape of the UK’s AI governance framework: how the technology
will develop, what the consequences will be of its increased deployment, whether as-yet
hypothetical risks will be realised, and how policy can best keep pace with the rate of
development in these and other areas.

These questions will need to be answered over the longer term.

Perhaps the most far-reaching of the challenges that AI poses is how to deal with a
technology which—in at least some of its variants—operates as a ‘black box’. That is to
say, the basis of and reasoning for its recommendations may be strictly unknowable.
Most of public policy—and the scientific method—is based on being able to observe
and validate the reasons why a particular decision is made and to test transparently
the soundness (or the ethics) of the connections that lead to a conclusion. In neural
network-based AI that may not be possible, but the predictive power of models may
nevertheless be very strong. A so-called ‘human in the loop’ may be unequal to the
power and complexity of the AI model. In our recommendations we emphasise a greater
role for testing of outputs of such models as a means to assess their power and acuity.
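
To illustrate what output-focused assurance can look like in practice, the minimal Python sketch below scores a hypothetical black-box model purely on its outputs, with no access to its internal reasoning. The model stub, the test cases and the pass threshold are all illustrative assumptions, not a description of any particular developer’s or evaluator’s methodology.

# A minimal sketch of output-based testing: the model is treated as a
# black box and judged only on whether its answers match known-good ones.
# The model stub, test cases and threshold below are hypothetical.

def opaque_model(prompt: str) -> str:
    """Stand-in for a model whose internal reasoning is unobservable."""
    canned = {
        "Is 17 a prime number?": "yes",
        "What is the capital of France?": "Paris",
    }
    return canned.get(prompt, "unknown")

TEST_CASES = [  # (input, expected output)
    ("Is 17 a prime number?", "yes"),
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 2?", "4"),
]

def evaluate(model, cases, threshold: float = 0.9) -> bool:
    """Score the model on its outputs alone against a pass threshold."""
    correct = sum(model(q).strip().lower() == a.lower() for q, a in cases)
    accuracy = correct / len(cases)
    print(f"accuracy: {accuracy:.0%} ({correct}/{len(cases)})")
    return accuracy >= threshold

if __name__ == "__main__":
    print("pass" if evaluate(opaque_model, TEST_CASES) else "fail")

Real evaluations, such as those undertaken by the AI Safety Institute, are of course far more extensive, but the principle is the same: an unexplainable model is judged by systematically testing what it produces.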

In the short term, it is important that the UK Government works to increase the level
of public trust in AI—a technology that has already become a ubiquitous part of our
everyday lives. If this public trust can be secured, we believe that AI can deliver on its
significant promise, to complement and augment human activity.

The Government has articulated the case for AI: better public services, high quality
jobs and a new era of economic growth driven by advances in AI capabilities. It has
confirmed its intention to pursue the principles-based approach proposed in its March
2023 AI White Paper and examined in our interim Report. Five high-level principles—
safety, security and robustness; appropriate transparency and explainability; fairness;
accountability and governance; and contestability and redress—underpin the
Government’s approach and have begun to be translated into sector-specific action by
regulators.

A key theme of our Inquiry has been whether the Government should bring forward
AI-specific legislation. Resolving this should be a priority for the next administration.
We believe that the next Government should be ready to introduce new AI-specific
legislation, should the current approach based on regulators’ existing powers and
voluntary commitments by leading developers prove insufficient to address current and
potential future harms associated with the technology.

The success of the UK’s approach to AI governance will be determined to a significant
extent by the ability of our sectoral regulators to put the Government’s high-level
principles into practice as AI continues to develop at pace. We have identified three
factors that will influence their ability to deliver: powers, coordination and resourcing.

On powers, we welcome confirmation that the Government will undertake a regulatory
gap analysis to determine whether regulators require new powers to respond properly
to the growing use of AI, as recommended in our interim Report. Concluding this
analysis and implementing its findings must be a priority for the next Government.

On coordination, the general-purpose nature of AI will, in some instances, involve
overlapping regulatory remits, and a possible lack of clarity over the responsibilities of
different regulators. This could create confusion on the part of consumers, developers
and deployers of the technology, as well as regulators themselves. The central steering
committee that the Government has said it will establish should be empowered to
provide guidance and, where necessary, direction to help regulators navigate any
overlapping remits, whilst respecting their independence. The regulatory gap analysis
should also put forward suggestions for delivering this coordination, including joint
investigations, a streamlined process for regulatory referrals, and enhanced levels of
information sharing.

On resourcing, the capacity of regulators is a concern. Ofcom, for example, is combining
the implementation of a broad new suite of powers conferred on it by the Online Safety Act
2023, with formulating a comprehensive response to the deployment of AI across its
regulatory ambit. Others will be required to undertake resource-intensive investigations
and it is vital that they have both the powers and resources to do so. We believe that the
announced £10 million to support regulators in responding to the growing prevalence
of AI is clearly insufficient to meet the challenge, particularly when compared to even
the UK-only revenues of leading AI developers.

The AI Safety Institute, established in its current form following the AI Safety Summit
at Bletchley Park in November 2023, is another key element of the UK’s AI governance
framework. The Institute’s leadership has assembled an impressive and growing team
of researchers and technical experts recruited from leading developers and academic
institutions, helped shape a global dialogue on AI safety, and—whilst not a regulator—
has played a decisive role in shaping the UK’s regulatory approach to AI. However,
the reported challenges the Institute has experienced in securing access to leading
developers’ future models to undertake pre-deployment safety testing are, if accurate,
a major concern. Whilst testing on already-available models is clearly a worthwhile
undertaking, the release of future models without the promised independent assessment
would undermine the achievement of the Institute’s mission and its ability to secure
public trust in the technology.

While international conversations about AI safety have generated a degree of consensus—
and provided a notable point of engagement with China—there is not an emerging
international standard on regulation. The UK has pursued a principles-based approach
that works through existing sector regulators. The Biden-Harris administration in the
United States has through its Executive Order issued greater direction to federal bodies
and Government departments. The European Union, meanwhile, has agreed its AI Act,
which takes a ‘horizontal’, risk-based approach, with AI uses categorised into four levels
of risk, and specific requirements for general-purpose AI models. The AI Act will enter
into force in phases between now and mid-2026.

Both the US and EU approaches to AI governance have their downsides. The scope
of the former imposes requirements only on federal bodies and relies on voluntary
commitments from developers. The latter has been criticised for a top-down, prescriptive
approach and the potential for uneven implementation across different member states.
The UK is entitled to pursue its own, distinct approach that draws on our track record
of regulatory innovation and the biggest cluster of AI developers outside the US and
China.

Among the areas where lessons from elsewhere could be applied is the formulation of
responses to the Twelve Challenges of AI Governance proposed in our interim Report.
We believe that all of these governance challenges still apply. We have proposed solutions
to each of them in this Report to demonstrate what policymakers in Government
should be doing.

These should not be viewed as definitive solutions to the challenges, but as provisional
illustrations of what policy might be in a complex, rapidly developing area. They are
summarised below.

1. The Bias Challenge. Developers and deployers of AI models and tools must not
merely acknowledge the presence of inherent bias in datasets, they must take steps to
mitigate its effects.

2. The Privacy Challenge. Privacy and data protection frameworks must account for
the increasing capability and prevalence of AI models and tools, and ensure the right
balance is struck.

3. The Misrepresentation Challenge. Those who use AI to misrepresent others, or allow
such misrepresentation to take place unchallenged, must be held accountable.

4. The Access to Data Challenge. Access to data, and the responsible management of
it, are prerequisites for a healthy, competitive and innovative AI industry and research
ecosystem.

5. The Access to Compute Challenge. Democratising and widening access to compute
is a prerequisite for a healthy, competitive and innovative AI industry and research
ecosystem.

6. The Black Box Challenge. We should accept that the workings of some AI models are
and will remain unexplainable and focus instead on interrogating and verifying their
outputs.

7. The Open-Source Challenge. The question should not be ‘open’ or ‘closed’, but rather
whether there is a sufficiently diverse and competitive market to support the growing
demand for AI models and tools.

8. The Intellectual Property and Copyright Challenge. The Government should
broker a fair, sustainable solution based around a licensing framework governing the
use of copyrighted material to train AI models.

9. The Liability Challenge. Determining liability for AI-related harms is not just a
matter for the courts—Government and regulators can play a role too.

10. The Employment Challenge. Education is the primary tool for policymakers to
respond to the growing prevalence of AI, and to ensure workers can ask the right
questions of the technology.

11. The International Coordination Challenge. A global governance regime for AI
may be neither realistic nor desirable, even if there are economic and security benefits
to be won from international co-operation.

12. The Existential Challenge. Existential AI risk may not be an immediate concern
but it should not be ignored, even if policy and regulatory activity should primarily
focus on the here and now.

1 Introduction
1. Since the publication of our interim Report examining the governance of artificial
intelligence (AI) in August 2023, debates over how to regulate the development and
deployment of AI have continued. These debates have often centred around the Twelve
Challenges of AI Governance we identified in our interim Report.¹

2. Jurisdictions including the UK,² European Union³ and United States⁴ have begun
to establish regulatory regimes to govern—to varying degrees and using different
approaches—the development and deployment of AI, ahead of the anticipated launch of
new, more advanced models during the months ahead.⁵

3. Whilst an international consensus around AI governance has not yet been reached,
the topic has risen up the agenda of various international fora, including the G7,⁶ United
Nations⁷ and at the AI Safety Summit organised by the UK Government at Bletchley Park
in November 2023.⁸

4. With AI’s growing ubiquity, the capability of existing models, and the ongoing
conversation about how best to regulate the development and deployment of the
technology, there is much to reflect on in our second Report.

Our Inquiry
5. We launched our Inquiry on 20 October 2022, to examine the impact of AI on
different areas of society and the economy; whether and how AI and its different uses
should be regulated; and the Government’s AI governance proposals.⁹ We have received
and published over 100 written submissions and taken oral evidence from 33 witnesses,
including Government Ministers and officials, AI researchers, businesses, civil society,
and professionals affected by the technology.

6. In August 2023 we published an interim Report,¹⁰ to which the Government
responded in November of that year.¹¹ We have also visited the United States, where we
met with representatives from public bodies, private companies and research institutions
in Boston and Washington, D.C.; and the European Union institutions in Brussels. We
are grateful to everyone who has contributed to our Inquiry.

1 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, summary
2 A pro-innovation approach to AI regulation, CP 815, Department for Science, Innovation and Technology, 29
March 2023
3 Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world, Council of
the European Union, 9 December 2023
4 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, The White
House, 30 October 2023
5 Frontier AI Taskforce: second progress report, GOV.UK, 30 October 2023
6 G7 nations to harness AI and innovation to drive growth and productivity, GOV.UK, 15 March 2024
7 Interim Report: Governing AI for Humanity, United Nations AI Advisory Body, 22 December 2023
8 Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration, GOV.UK,
1 November 2023
9 MPs to examine regulating AI in new inquiry, UK Parliament, 20 October 2022
10 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769
11 Science, Innovation and Technology Committee, First Special Report of Session 2023–24, The governance of
artificial intelligence: interim report: Government response to the Committee’s Ninth report, HC 248

Aims of this Report


7. This Report examines domestic and international developments in the governance
and regulation of AI since the publication of our interim Report. It also revisits the Twelve
Challenges of AI Governance we identified in our interim Report and suggests how they
might be addressed by policymakers.

• In Chapter 2, we consider the case for AI.

• In Chapter 3, we examine the likelihood of AI-specific regulation in the UK.

• In Chapter 4, we assess the role of regulators in the UK’s AI governance
framework.

• In Chapter 5, we examine the deployment of AI in the public and private sectors.

• In Chapter 6, we consider the role of the UK AI Safety Institute.

• In Chapter 7, we assess the regulatory approaches to AI taken by the European
Union and the United States.

• Finally, in Chapter 8, we revisit our Twelve Challenges of AI Governance and
offer some potential solutions for policymakers.

8. With a General Election approaching we have sought to make this Report
futureproof and believe that our conclusions and recommendations will remain
applicable to future Administrations. It is important that the timing of the General
Election does not stall necessary efforts by the Government, developers and deployers of
AI to increase the level of public trust in a technology that has become a central part of
our everyday lives.

2 The case for AI


9. As AI has become an increasingly ubiquitous, general-purpose technology, debates
over the societal and economic implications have continued. In this Chapter, we will
examine the increasingly widespread deployment of AI, the positive and negative
consequences, and the ‘case in favour’ of the technology.

The deployment phase


10. In our interim Report, published in August 2023, we examined the potential benefits
of the deployment of AI models and tools in healthcare provision, medical research and
education.¹² Since then, we have seen AI used across a growing number of societal and
economic activities—as one analysis has observed, 2024 has been the deployment phase
for AI.¹³

11. The UK Government has consistently emphasised the benefits associated with the
deployment of AI in the public and private sectors, a topic we will return to in Chapter 5.
The Secretary of State for Science, Innovation and Technology, Rt. Hon. Michelle Donelan
MP, described AI to us as “… a foundational technology that interlinks with all the other
technologies”.¹⁴

Pros and cons


12. The Secretary of State’s depiction of AI as a foundational technology has been borne
out in the expanding range of sectors where it has improved existing processes and either
offered or already delivered tangible productivity gains, by augmenting and assisting
skilled workers.¹⁵

13. One sector highlighted to us as well-placed to capitalise on the benefits associated with
the deployment of AI is financial services, a key driver of growth for the UK economy.¹⁶
Nikhil Rathi, Chief Executive of the Financial Conduct Authority (FCA), has said that
whilst not new, “AI in markets today brings models incorporating deep learning and
neural networks capable of analysing large datasets and highlighting intricate patterns”,
which facilitate “… synchronised, automated order placements”.¹⁷

14. Cyber security is another sector where the analytical capabilities of the technology
have been deployed. NCC Group, a cyber security firm, told us that AI was “… used by
cyber defenders to analyse large data sets at scale, support threat intelligence and mimic

12 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, Chapter 3
13 AI poised to begin shifting from ‘excitement’ to ‘deployment’ in 2024, Goldman Sachs, 17 November 2023
14 Q771
15 State of AI Report, Air Street Capital, 12 October 2023
16 State of the sector: annual review of UK financial services 2023, City of London Corporation and HM Treasury,
4 July 2023
17 Collaborate to compete: why we must all embrace a growth mindset, Financial Conduct Authority, 18 October
2023

the behaviours of cyber attackers, so that organisations can understand and prepare for
potential attacks”.¹⁸ Leading AI developers have built generative AI models that can assist
and augment the work undertaken by human cyber security analysts.¹⁹

15. However, these benefits cannot be realised without incurring risks. The FCA told
us that “AI’s potential for autonomous decision-making brings with it potentially serious
challenges to governance processes, because it puts in to question the ownership and
responsibility for decision making”.²⁰ It also said that the increasing complexity of AI
models “… will require a greater focus on testing, validation and explainability… built on
strong accountability principles”.²¹

16. In the cyber security field, NCC Group told us that in addition to the benefits of AI, it
was “… lowering the barrier of entry into cybercrime, making it easier for cyber attackers
to successfully target victims and widening the availability of voice cloning, deep fakes
and social engineering bots”.²² The National Cyber Security Centre, part of GCHQ, has
said that “AI is already being used in malicious cyber activity and will almost certainly
increase the volume and impact of cyber attacks—including ransomware—in the near
term”.²³

Energy consumption
17. Since the publication of our interim Report, increased attention has been paid to the
environmental impact of the development and use of AI models and tools, particularly
their electricity and water consumption.²⁴ Researchers at the University of California,
Riverside have estimated that running 10–50 ‘inferences’—or queries—using OpenAI’s
GPT-3 model can equate to consuming “… 500 millilitres of water, depending on when
and where the model is hosted. GPT-4, the model currently used by ChatGPT, reportedly
has a much larger size and hence likely consumes more water than GPT-3”.²⁵
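
The per-query figures implied by this estimate can be made concrete with simple arithmetic. The short Python sketch below converts the quoted figure of 500 millilitres per 10–50 inferences into an approximate per-query range; it is illustrative only, since, as the researchers note, actual consumption depends on when and where a model is hosted.

# Back-of-envelope conversion of the quoted estimate: 500 ml of water
# per 10-50 GPT-3 inferences. Illustrative arithmetic only.
WATER_ML = 500        # millilitres per batch of inferences (as quoted)
for n in (10, 50):    # the quoted range of inferences per 500 ml
    print(f"{n} inferences per 500 ml -> ~{WATER_ML / n:.0f} ml per query")
# ~50 ml per query at 10 inferences; ~10 ml per query at 50 inferences.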

18. In its 2024 environmental sustainability report, Microsoft confirmed that its overall
carbon emissions had risen by 29.1% since 2020. It attributed this to “… the construction
of more datacenters and the associated embodied carbon in building materials, as well as
hardware components such as semiconductors, servers, and racks”.²⁶ Whilst Microsoft
and other leading developers such as Google have set targets to reduce their emissions
and energy consumption by the end of the decade,²⁷ it is nevertheless notable that Sam
Altman, CEO of OpenAI, said in January 2024 that “we still don’t appreciate the energy
needs of this technology”.²⁸

18 NCC Group (CYB0008)


19 Is artificial intelligence the solution to cyber security threats?, Financial Times, 16 January 2024
20 Financial Conduct Authority (GAI0125)
21 Financial Conduct Authority (GAI0125)
22 NCC Group (CYB0008)
23 The near-term impact of AI on the cyber threat, National Cyber Security Centre, 24 January 2024
24 How much electricity does AI consume? The Verge, 16 February 2024;
25 How much water does AI consume? The public deserves to know, OECD.AI, 30 November 2023
26 Microsoft 2024 Environmental Sustainability Report, Microsoft, 15 May 2024, p. 5
27 Microsoft’s emissions jump almost 30% as it races to meet AI demand, Financial Times, 15 May 2024; Net-zero
carbon, Google, accessed 23 May 2024
28 Sam Altman says the future of AI depends on breakthroughs in clean energy, The Verge, 19 January 2024

Making the case?


19. Government communications have referred to both the positive and negative
consequences associated with the increasing deployment of AI. The Prime Minister, Rt.
Hon. Rishi Sunak MP, has said that “… the more we learn about frontier technologies
like AI, the more they widen our horizons… the possibilities are extraordinary”.²⁹ The
Secretary of State for Science, Innovation and Technology similarly described improved
AI capabilities as a “… once-in-a-generation opportunity for the British people to
revolutionise our public services for the better and to deliver real, tangible, long-term
results for our country”.³⁰

20. At the same time, the Government response to its AI White Paper consultation
detailed three categories of AI-related risk—societal harms, misuse risk, and autonomy
risk³¹—and an AI Safety Institute has been established in order to “… minimise surprise
to the UK and humanity from rapid and unexpected advances… by developing the
sociotechnical infrastructure needed to understand the risks of advanced AI and support
its governance”.³² We will examine the Government’s approach to AI governance, the
deployment of AI in the public and private sectors and the role of the AI Safety Institute
in Chapters 3, 4, 5 and 6 of this Report.

21. Our interim Report identified Twelve Challenges of AI Governance that we said could
potentially complicate the process of ensuring that policy could deliver the beneficial
consequences of AI whilst also safeguarding the public interest and preventing known
potential harms, both societal and individual.³³ We will offer some potential solutions to
these Challenges in Chapter 8 of this Report.

22. If governed appropriately, we believe that AI can deliver on its significant promise,
to complement and augment human activity. The Government has articulated the case
for AI: better public services, high quality jobs and a new era of economic growth
driven by advances in AI capabilities.

23. The Government is right to emphasise the potential societal and economic benefits
to be won from the strategic deployment of AI. However, as our interim Report
highlighted, the challenges are as clear as the potential benefits, and these benefits
cannot be realised without public trust in the technology.

24. The Government should certainly make the case for AI, but should equally ensure
that its regulatory framework addresses the Twelve Challenges of AI Governance that
we identified in our interim Report and to which we offer potential solutions in this Report.

29 PM speech at London Tech Week, GOV.UK, 12 June 2023


30 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 3
31 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 17
32 Frontier AI Taskforce: second progress report, Department for Science, Innovation and Technology, 30 October
2023
33 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, summary

3 AI-specific legislation
25. In February 2024 the Government set out further details of its regulatory approach
to AI in the form of its response to a consultation on its AI White Paper,³⁴ which was
published in March 2023.³⁵ In this Chapter we will examine the consultation response
and the likelihood of AI-specific legislation in the UK following the General Election.

A principles-based approach
26. The consultation response confirmed that the Government would pursue the
principles-based approach it proposed in its March 2023 AI White Paper and examined
in our interim Report.³⁶ Five high-level principles—”safety, security and robustness;
appropriate transparency and explainability; fairness; accountability and governance;
and contestability and redress”³⁷—underpinned the Government’s approach, which it
has described as a combination of “… cross-sectoral principles and a context-specific
framework, international leadership and collaboration, and voluntary measures on
developers”.³⁸

27. The Government has said that its intended framework has been developed in order
to avoid “… unnecessary blanket rules that apply to all AI technologies, regardless of
how they are used. This is the best way to ensure an agile approach that stands the test of
time”.³⁹ Existing regulators have been asked to implement the five high-level principles in
their respective sectors.⁴⁰

AI-specific legislation
28. A key question for our Inquiry has been whether the Government should bring
forward AI-specific legislation. Our interim Report pointed out that the period leading up
to the General Election would be the final opportunity to enact legislation before “… late
2025—more than two years from now and nearly three years from the publication of the
[AI] White Paper”.⁴¹

29. In the March 2023 AI White Paper the Government said that it anticipated legislating,
at a minimum, to establish ‘due regard’ duties for existing regulators in relation to its five

34 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024
35 A pro-innovation approach to AI regulation, CP 815, Department for Science, Innovation and Technology,
29 March 2023
36 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, Chapter 5
37 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 13
38 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 7
39 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 13
40 A pro-innovation approach to AI regulation, CP 815, Department for Science, Innovation and Technology,
29 March 2023, p. 6
41 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 105

high-level principles.⁴² Our interim Report said that “… [this] commitment alone—in
addition to any further requirements that may emerge—suggests that there should be a
tightly-focussed AI Bill in the November 2023 King’s Speech”.⁴³

30. In its response to our interim Report, received in November 2023, the Government
said that “… rather than rushing to legislate, we want to simultaneously learn about model
capabilities and risks, while also carefully considering the frameworks for action”.⁴⁴ In
December 2023 the Secretary of State for Science, Innovation and Technology told us that
the Government would not bring forward an AI-specific Bill before the General Election,
given the time that it would likely take to become law:

The key here is timing. We are not saying that we would never legislate in
this space. Of course we would… every Government will have to legislate
eventually. The point is that we do not want to rush to legislate and get this
wrong. We do not want to stifle innovation.⁴⁵

31. The Government response to the AI White Paper consultation confirmed this
and argued that the proposed “… non-statutory approach currently offers critical
adaptability—especially while we are still establishing our approach”.⁴⁶ The Government
has also emphasised the importance of the safety-related voluntary commitments secured
from leading AI developers ahead of the AI Safety Summit at Bletchley Park,⁴⁷ and the
ongoing safety testing work being undertaken by the AI Safety Institute.⁴⁸ We will return
to the role of the AI Safety Institute in Chapter 6.

32. Despite the expressed preference for a principles-based approach, the AI White Paper
consultation response confirmed that the Government would bring forward legislation
targeted at the most capable, general-purpose AI models and tools if such legislation
became necessary, specifically if:

… we determined that existing mitigations were no longer adequate and we
had identified interventions that would mitigate risks in a targeted way…
if we were not sufficiently confident that voluntary measures would be
implemented effectively by all relevant parties and if we assessed that risks
could not be effectively mitigated using existing legal powers.⁴⁹

42 A pro-innovation approach to AI regulation, CP 815, Department for Science, Innovation and Technology,
29 March 2023, p. 6
43 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 106
44 Science, Innovation and Technology Committee, First Special Report of Session 2023–24, The governance of
artificial intelligence: interim report: Government response to the Committee’s Ninth report, HC 248, p. 8
45 Q757
46 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 14
47 Leading frontier AI companies publish safety policies, GOV.UK, 27 October 2023
48 World leaders, top AI companies set out plan for safety testing of frontier as first global AI Safety Summit
concludes, GOV.UK, 2 November 2023
49 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 33

In April 2024 the Financial Times reported that Government officials were “…
beginning to craft new legislation to regulate artificial intelligence”, and that “… such
legislation would likely put limits on the production of large language models, the general-
purpose technology that underlies AI products such as OpenAI’s ChatGPT”.⁵⁰

33. The next Government should stand ready to introduce new AI-specific legislation,
should an approach based on regulatory activity, existing legislation and voluntary
commitments by leading developers prove insufficient to address current and potential
future harms associated with the technology.

34. The Government should in its response to this Report provide further consideration
of the criteria on which a decision to legislate will be triggered, including which model
performance indicators, training requirements such as compute power or other factors
will be considered.

35. The next Government should commit to laying before Parliament quarterly
reviews of the efficacy of its current approach to AI regulation, including a summary
of technological developments related to its stated criteria for triggering a decision to
legislate, and an assessment of whether these criteria have been met.

50 UK rethinks AI legislation as alarm grows over potential risks, Financial Times, 15 April 2024

4 The role of regulators


36. In addition to existing legislation and voluntary measures, the Government’s high-
level principles have begun to be translated into sector-specific action by regulators.⁵¹
Our interim Report detailed different views about the capacity of individual regulators
to respond to the growing use of AI,⁵² and the Government has also acknowledged that
different regulators are at different stages of readiness.⁵³ In this Chapter we assess the role
of regulators in the UK’s AI governance framework.

Powers
37. Since the publication of our interim Report we have examined the preparedness of
regulators to respond to AI’s increasing prevalence, including the Competition and Markets
Authority (CMA), Financial Conduct Authority (FCA), Information Commissioner’s
Office (ICO) and Ofcom; as well as the Digital Regulation Cooperation Forum (DRCF),
described as a co-ordination mechanism or “connective tissue” by its Chief Executive,
Kate Jones.⁵⁴

38. Whilst all four regulators told us that they were well-placed to respond to the
growing use of AI in their respective sectors,⁵⁵ Nikhil Rathi, Chief Executive of the FCA,
has described the challenge of “… managing the balance between transparency, fairness
to firms and competitiveness”, particularly in the announcement of investigations.⁵⁶
Dame Melanie Dawes, Chief Executive of Ofcom, the communications and online safety
regulator, said that she was in favour of “… the Government having a policy function to
scan where there may be gaps in the regulatory landscape”.⁵⁷

39. A regulatory gap analysis was recommended in our interim Report,⁵⁸ and in its
response to the AI White Paper consultation the Government said that it recognised:

… the need to assess the existing powers and remits of the UK’s regulators
to ensure they are equipped to address AI risks and opportunities in their
domains and implement the principles in a consistent and comprehensive
way. We will, therefore, work with government departments and regulators
to analyse and review potential gaps in existing regulatory powers and
remits.⁵⁹

51 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 13
52 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 97
53 A pro-innovation approach to AI regulation, CP 815, Department for Science, Innovation and Technology,
29 March 2023, p. 15
54 Q610
55 Competition and Markets Authority (GAI0124), Financial Conduct Authority (GAI0125), Information
Commissioner’s Office (GAI0112), Ofcom (GAI0126)
56 Navigating the UK’s Digital Regulation Landscape: Where are we headed? Financial Conduct Authority, 22 April
2024
57 Q547
58 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 104
59 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 15

40. We welcome confirmation that the Government will undertake a regulatory gap
analysis to determine whether regulators require new powers to respond properly to
the growing use of AI, as recommended in our interim Report. However, as the end of
this Parliament approaches, there is no longer time to bring forward any updates to
current regulatory remits and powers, should they be discovered to be necessary. This
could constrain the ability of regulators to properly implement the Government’s AI
principles and undermine the UK’s overall approach.

41. The next Government should conduct and publish the results of its regulatory gap
analysis as soon as is practicable. If the analysis identifies any legislation required to
close regulatory gaps, this should be brought forward in time for it to be enacted as soon
as possible after the General Election.

Coordination
42. The Government has said that regulators will need to coordinate with each other
when implementing its AI principles,⁶⁰ whilst our interim Report concluded that it would
likely need to establish “a more well-developed central coordinating function” than that
proposed in the AI White Paper.⁶¹

43. Will Hayter, Executive Director for Digital Markets at the CMA, told us that the four
DRCF member regulators “… all understand throughout our digital regulation activities
that we need to talk to one another, sharing expertise and attempting to be more than the
sum of our parts”.⁶² Similarly, four national bodies in the field of health—the Care Quality
Commission, Health Research Authority, Medicines and Healthcare Products Regulatory
Agency and National Institute for Health and Care Excellence—have launched a joint
Artificial Intelligence and Digital Regulations Service, which “… aims to clearly set out
the information and guidance that developers and adopters need to follow to develop safe,
innovative technologies in health and social care”.⁶³

44. In its response to the AI White Paper consultation, the Government confirmed that
it would set up “a steering committee with government representatives and key regulators
to support knowledge exchange and coordination on AI governance”.⁶⁴ A first iteration of
guidance for regulators on how to implement the five principles has also been published.⁶⁵

45. The general-purpose nature of AI will, in some instances, lead to regulatory
overlap, and a potential blurring of responsibilities. This could create confusion on the
part of consumers, developers and deployers of the technology, as well as regulators
themselves. The steering committee that the Government has said it will establish
should be empowered to provide guidance and, where necessary, direction to help
regulators navigate any overlapping remits, whilst respecting the independence of the
UK’s regulators.
60 A pro-innovation approach to AI regulation, CP 815, Department for Science, Innovation and Technology,
29 March 2023, p. 42
61 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 103
62 Q585
63 Artificial Intelligence and Digital Regulations Service launches, NHS Health Research Authority, 7 March 2023
64 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 15
65 Implementing the UK’s AI Regulatory Principles: Initial Guidance for Regulators, Department for Science,
Innovation and Technology, 6 February 2024

46. The regulatory gap analysis being undertaken by the Government should identify,
in consultation with the relevant regulators and co-ordinating entities such as the Digital
Regulation Cooperation Forum and the AI and Digital Regulations Service, areas
where new AI models and tools will necessitate closer regulatory co-operation, given the
extent to which some uses for AI, and some of the challenges these can present—such as
accelerating existing biases—are covered by more than one regulator. The gap analysis
should also put forward suggestions for delivering this co-ordination, including joint
investigations, a streamlined process for regulatory referrals, and enhanced levels of
information sharing.

Resourcing
47. Some regulators, such as the FCA, are “… funded entirely by the fees [they] charge
regulated firms”.⁶⁶ Similarly, Ofcom receives fees from “… the companies we regulate…
these could be broadcasters, telecoms providers, or firms in the postal sector”. The fees
are based on company revenues and the amount of work undertaken by Ofcom in their
sectors, although an overall spending cap is applied through the Government Spending
Review process.⁶⁷ Other regulators, such as the Information Commissioner’s Office,
supplement fees with a Government grant-in-aid.⁶⁸

48. The picture is similarly mixed in other jurisdictions. In the European Union (EU),
some decentralised agencies such as the European Medicines Agency are funded through
a combination of regulatory fees and contributions from the EU budget;⁶⁹ whilst others,
such as the EU Intellectual Property Office, are “… financed through registration fees
without imposing any burden on the EU or its taxpayers”.⁷⁰ In the United States, bodies
such as the Federal Trade Commission and Food and Drug Administration rely upon a
combination of user fees and budget requests submitted to Congress.⁷¹

49. In a Report examining regulators’ performance, published in February 2024,
the House of Lords Industry and Regulators Committee expressed concern that some
regulators in the UK “… appear not to have sufficient resources to carry out their existing
functions effectively, while others have had their responsibilities extended without an
increase in resources to match”.⁷²

50. Our Inquiry heard that some regulators would require additional support to help
them meet the challenges posed by the growing prevalence of AI in their sectors.⁷³ Dame
Melanie Dawes, Chief Executive of Ofcom, told us that Ofcom had experienced:

66 About the FCA, Financial Conduct Authority, accessed 23 May 2024


67 How is Ofcom funded? Ofcom, 25 April 2024
68 How we are funded, Information Commissioner’s Office, accessed 23 May 2024
69 Funding, European Medicines Agency, accessed 23 May 2024
70 About us, European Union Intellectual Property Office, accessed 23 May 2024
71 Congressional Budget Justification Fiscal Year 2025, Federal Trade Commission, accessed 23 May 2024; FY 2025
FDA Budget Summary, Food and Drug Administration, accessed 23 May 2024
72 House of Lords Industry and Regulators Committee, First Report of Session 2023–24, Who watches the
watchdogs? Improving the performance, independence and accountability of UK regulators, HL Paper 56, para
134
73 A pro-innovation approach to AI regulation, CP 815, Department for Science, Innovation and Technology,
29 March 2023, p. 62

… a flat cash budget cap from the Treasury for many years now. I think at
some point that will start to create real constraints for us. We have become
very good at driving efficiency, but if the Government were to ask us to do
more in the field of AI, we would need new resources to be able to do that.⁷⁴

51. Similar concerns were noted by the Committee on Standards in Public Life (CSPL),
an advisory non-departmental public body that has examined the implications of AI’s
increasing prevalence, including for regulators.⁷⁵ The CSPL has said that whilst many
regulators had identified AI as a strategic priority, “… some regulators told us that because
they operate under restricted financial resources, the speed and scale at which they can
address the implications of AI is limited”.⁷⁶

52. The Secretary of State for Science, Innovation and Technology and Sarah Munby,
Permanent Secretary at the Department for Science, Innovation and Technology (DSIT)
told us that the Government would continue to support regulators as required.⁷⁷ In its AI
White Paper consultation response, the Government said that it had asked a number of
regulators to set out publicly how they intend to respond to the use of AI in their respective
sectors.⁷⁸

53. In addition to a “… central function to support effective risk monitoring, regulator
coordination, and knowledge exchange”, the AI White Paper consultation response
announced £10 million in funding to “jumpstart regulators’ AI capabilities” and “… help
our regulators develop cutting-edge research and practical tools to build the foundations
of their AI expertise and everyday ability to address AI risks…”.⁷⁹ It has also highlighted
the role of the DRCF in the dissemination of best practice.⁸⁰

54. If this £10 million were divided equally amongst the 14 regulators who were asked
to publish their intended approaches to AI by the end of April 2024,⁸¹ each would receive
an amount equivalent to approximately 0.0085% of the reported annual UK turnover of
Microsoft in the year to June 2023.⁸²
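
The arithmetic behind this comparison can be reproduced directly; the short Python sketch below uses the figures cited above. The turnover figure of roughly £8 billion is as reported by The Times and is a rounded assumption: on these numbers the share comes to about 0.009%, the same order of magnitude as the approximately 0.0085% cited above.

# Reproducing the order-of-magnitude comparison in paragraph 54.
# Microsoft's UK turnover is taken as roughly £8 billion (rounded,
# as reported); only the scale of the ratio matters here.
FUND_GBP = 10_000_000
REGULATORS = 14
TURNOVER_GBP = 8_000_000_000

per_regulator = FUND_GBP / REGULATORS    # ~£714,286 each
share = per_regulator / TURNOVER_GBP     # ~0.0089% of turnover
print(f"per regulator: £{per_regulator:,.0f}; share of turnover: {share:.4%}")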

55. The increasing prevalence and general-purpose nature of AI will create challenges
for the UK’s sectoral regulators, however expert they may be. The AI challenge
can be summed up in a single word: capacity. Ofcom, for example, is combining
the implementation of a broad new suite of powers conferred on it by the Online Safety
Act 2023, with formulating a comprehensive response to AI’s deployment across its
wider remit. Others will be required to undertake resource-intensive investigations
and it is vital that they are able, and empowered, to do so. All will be required to pay
greater attention to the outputs of AI tools in their sectors, whilst paying due regard to
existing innovation and growth-related objectives.
74 Q543
75 Committee on Standards in Public Life (GAI0110)
76 Artificial intelligence and public standards: an update on progress made against our 2020 recommendations,
Committee on Standards in Public Life, 6 March 2024
77 Qq. 767–778
78 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 14
79 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 7
80 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 55
81 Regulators’ strategic approaches to AI, Department for Science, Innovation and Technology, 1 May 2024
82 Jobs boost at Microsoft as UK revenues hit £8bn, The Times, 27 April 2024

56. The announced £10 million to support regulators in responding to the growing
prevalence of AI is clearly insufficient to meet the challenge, particularly when
compared to the UK revenues of leading AI developers.

57. The next Government must announce further financial support, agreed in
consultation with regulators, that is commensurate to the scale of the task. It should also
consider the benefits of a one-off or recurring industry levy, that would allow regulators
to supplement or replace support from the Exchequer for their AI-related activities.

5 AI in the public and private sectors


58. The Government has paid increasingly close attention to the deployment of AI in
the public sector and announced a number of initiatives to deliver its ambitions. In this
Chapter, we assess the Government’s efforts to support increased uptake of AI in the
public and private sectors.

Public sector
59. A National Audit Office (NAO) report found in 2023 that the Central Digital and Data
Office (part of the Cabinet Office), DSIT and HM Treasury had begun to develop a strategy for
AI adoption in the public sector. The NAO found that “high-level activities and timescales”
had been agreed, but the draft strategy did not “… set out which department has overall
ownership of the strategy and accountability for its delivery or how it will be funded and
resourced. Performance measures are also still to be determined”.⁸³

i.AI

60. In November 2023 the Deputy Prime Minister and Chancellor of the Duchy of
Lancaster, and Secretary of State in the Cabinet Office (henceforth Deputy Prime Minister),
Rt. Hon. Oliver Dowden CBE MP, announced the establishment of an Incubator for
Artificial Intelligence (i.AI). This comprises an initial team of 30 technical experts
“… to design and implement AI solutions across government departments to drive
improvements in public service delivery”.⁸⁴ It was subsequently confirmed that the i.AI
team would grow to 70, having instigated 10 pilot programmes since its launch.⁸⁵

61. The NAO has found that i.AI will require an estimated £101 million in funding over
five years between 2024–25 and 2028–29.⁸⁶ Examples of its work to date have included
i.AI assisting the Public Sector Fraud Authority in the development of AI-enabled fraud
detection tools.⁸⁷

Public sector productivity programme

62. Announced by the Chancellor of the Exchequer, Rt. Hon. Jeremy Hunt MP, in June
2023 and detailed at subsequent fiscal events,⁸⁸ the public sector productivity programme
has been described as seeking to secure “the potential productivity benefits from applying
AI to routine tasks across the public sector”, in services such as education and policing.⁸⁹

63. At the March 2024 Spring Budget the Chancellor announced that £4.2 billion
of funding would be allocated to “… the strongest productivity releasing projects that

83 Use of artificial intelligence in government, National Audit Office, HC 612, p. 18


84 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 21
85 Deputy Prime Minister speech on AI for Public Good, GOV.UK, 29 February 2024
86 Use of artificial intelligence in government, National Audit Office, HC 612, p. 17
87 “Criminals should be aware” says Minister as Government upgrades AI fraud detection tool, GOV.UK, 14 March
2024
88 Hunt announces ‘most ambitious public sector productivity review ever’, Civil Service World, 13 June 2023
89 Chancellor to cut admin workloads to free up frontline staff, GOV.UK, 18 November 2023

departments have identified through the programme to date”.⁹⁰ These include the
deployment of AI to improve or automate existing processes, such as £3.4 billion by
2030 for a “technological and digital transformation” of the NHS.⁹¹

64. HM Treasury also confirmed in a paper published alongside the Budget that it would,
working in partnership with the Cabinet Office and i.AI, assist in the delivery of “… AI
adoption plans for every department in time for the next Spending Review and… expand
the application of automation and AI across the range of priority areas”.⁹²

Ministerial and Cabinet groups

65. The Government has outlined mechanisms to coordinate departmental AI activity,
including an Inter-Ministerial Group and the designation of lead AI Ministers across all
departments “… to bring together work on risks and opportunities driven by AI in their
sectors and to oversee implementation of frameworks and guidelines for public sector
usage of AI”.⁹³

Encouraging adoption

66. Both the Cabinet Office and DSIT have announced initiatives to underpin adoption
of AI across the public sector. The former has confirmed that it will “… [improve] digital
infrastructure and access to data sets, and [develop] centralised standards”,⁹⁴ whilst the
Central Digital and Data Office, part of the Cabinet Office, has published guidance on the
use of generative AI in Government.⁹⁵

67. Government Digital Service, also part of the Cabinet Office, has experimented with
a generative AI chatbot, GOV.UK Chat, but found that its “… answers did not reach the
highest level of accuracy demanded for a site like GOV.UK, where factual accuracy is
crucial”.⁹⁶ The Cabinet Office and Infrastructure and Projects Authority have also
encouraged “… responsible experimentation with AI to find solutions to the biggest
challenges in public projects”.⁹⁷

68. The Government response to the AI White Paper consultation confirmed that use of
the Algorithmic Transparency Recording Standard (ATRS), a tool designed to increase
transparency from public sector organisations about how they use algorithmic tools to
support decision-making,⁹⁸ would become a requirement for all Government departments
during 2024 and “… across the broader public sector over time”.⁹⁹
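
By way of illustration only, an ATRS entry is essentially a structured, published record describing an algorithmic tool and how it is used. The sketch below shows the general shape of such a record; the field names and the tool described are hypothetical approximations, not the Standard’s exact schema.

```python
# Illustrative sketch only: the general shape of an ATRS-style transparency
# record. Field names are indicative approximations of the Standard's themes
# (what the tool is, how it is used, and how risks are managed); the tool
# described is hypothetical.

atrs_record = {
    "name": "Example fraud-triage tool",
    "organisation": "Example Government Department",
    "description": "Flags benefit claims for manual review.",
    "role_in_decision_making": "Decision support only; a caseworker "
                               "makes the final decision.",
    "model_type": "Gradient-boosted classifier",
    "training_data": "Historical claims data, anonymised before use",
    "human_oversight": "All flagged cases are reviewed by trained staff",
    "risks_and_mitigations": [
        "Dataset bias: monitored through regular disparity reporting",
        "Incorrect flags: an appeal route is available to claimants",
    ],
}

# Render the record as a simple published disclosure.
for field, value in atrs_record.items():
    if isinstance(value, list):
        print(f"{field}:")
        for item in value:
            print(f"  - {item}")
    else:
        print(f"{field}: {value}")
```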

90 Spring Budget 2024, HC 560, HM Treasury, 6 March 2024, p. 31


91 Spring Budget 2024, HC 560, HM Treasury, 6 March 2024, pp. 31–35
92 Seizing the opportunity: delivering efficiency for the public, HM Treasury, 6 March 2024, p. 32
93 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 17
94 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 22
95 Generative AI Framework for HMG, Cabinet Office and Central Digital & Data Office, 18 January 2024
96 The findings of our first generative AI experiment: GOV.UK Chat, Inside GOV.UK, 18 January 2024
97 Government to harness the power of AI to improve public project delivery under new framework, GOV.UK,
20 March 2024
98 Algorithmic Transparency Recording Standard Hub, Central Digital and Data Office and Department for Science,
Innovation and Technology, accessed 23 May 2024
99 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, pp. 22, 40

Private sector
69. The Government has also launched a series of initiatives intended to support private
sector adoption of AI. The Secretary of State for Science, Innovation and Technology
and the Prime Minister’s Special Adviser on Business and Investment, Lord Petitgas,
established an AI Opportunity Forum with business leaders. It discussed interrelated
issues such as “… AI culture and skills of organisations in the UK, how they manage
governance, awareness, and risks of the technology, and the availability of data”.¹⁰⁰

70. The Forum has also discussed the challenges that businesses have faced when
seeking to adopt AI, including access to data and access to compute¹⁰¹—two Challenges
of AI Governance highlighted in our interim Report and discussed in Chapter 8 of this
Report. The Government has said that the Forum will produce “a product that will inspire
businesses, whether they are a Silicon Roundabout start-up or a family-run firm, to start
using AI”, but had yet to confirm a publication date when our Report was finalised.¹⁰²

71. The Chancellor also announced the launch of an SME Digital Adoption Taskforce at
the Spring Budget 2024, along with an upskilling fund pilot targeted at SMEs. The pilot,
applications for which were due to close on 31 May, will make £6.4 million of grant
funding available in 2024–25.¹⁰³ Successful applicants may receive up to 50% of the cost of “… training
which supports employees to develop their technical skills and/or understanding of AI to
be able to develop, deploy, or use AI in their role”.¹⁰⁴

72. DSIT has produced guidance on AI assurance, which details how organisations
can “… measure and evaluate their systems and communicate that their systems are
trustworthy and aligned with relevant regulatory principles”.¹⁰⁵ It has also supported the
establishment of an AI and Digital Hub aimed at “… innovators with queries concerning
cross-regulatory AI and digital issues”, led by the Digital Regulation Cooperation Forum,
a co-ordinating body.¹⁰⁶

73. AI can be used to increase productivity and augment the contributions of human
workers in both the public and private sectors. We welcome the establishment of i.AI
and the focus on AI deployment set out in the public sector productivity programme,
as well as initiatives to increase business adoption such as the AI and Digital Hub.

74. The next Government should drive safe adoption of AI in the public sector via
i.AI, the National Science and Technology Council and designated lead departmental
Ministers for AI.

75. In its response to this Report, the Government should confirm the full list of public
sector pilots currently being led or supported by i.AI, the criteria that determined i.AI
pilot project selections, how it intends to evaluate their success and decide whether to
roll them out more widely, and what other pilots are planned for the remainder of 2024.

100 Business and tech heavyweights to boost productivity through AI, GOV.UK, 25 January 2024
101 AI Opportunity Forum holds first meeting, GOV.UK, 15 February 2024
102 AI Opportunity Forum holds penultimate meeting, GOV.UK, 9 May 2024
103 Spring Budget 2024, HC 560, HM Treasury, 6 March 2024, p. 61
104 Flexible AI Upskilling Fund pilot: open for applications, Department for Science, Innovation and Technology,
accessed 23 May 2024
105 Introduction to AI Assurance, Department for Science, Innovation and Technology, 12 February 2024, p. 15
106 AI and Digital Hub, Digital Regulation Cooperation Forum, accessed 23 May 2024

76. i.AI should undertake an assessment of the existing civil service workforce’s AI
capability, identify areas of the public sector that would benefit the most from the use of
AI and where value for money can be delivered, set out how potential risks associated
with its use should be mitigated, and publish a detailed AI public sector action plan.
Progress against these should be reported to Parliament on an annual basis and through
regular written or oral statements by Ministers.

77. The requirement for Government departments to use the Algorithmic Transparency
Recording Standard should be extended to all public bodies sponsored by Government
departments, from 1 January 2025.

6 The AI Safety Institute


78. In our interim Report we welcomed the establishment of the then-Foundation
Model Taskforce.¹⁰⁷ The Taskforce has subsequently become a permanent entity as the AI
Safety Institute,¹⁰⁸ and assumed a key role in the UK’s AI governance framework. In this
Chapter we examine the role of the Institute, its priorities, and the AI Safety Summits it
has informed.

From Taskforce to Institute


79. Shortly after the publication of our interim Report, the Foundation Model Taskforce
was renamed the Frontier Model Taskforce and announced that several leading AI
researchers and technical organisations had joined or partnered with it.¹⁰⁹ In its first
progress report, the Taskforce said that it had offered these individuals:

… the opportunity to fundamentally alter society’s approach to tackling
risks at the frontier of AI. These researchers and engineers will bring their
skills towards giving the government the capability to work directly on
frontier AI models and evaluate their risks—through model evaluations,
red-teaming,¹¹⁰ and other aspects of safety infrastructure.¹¹¹

To date, the Institute has recruited both academic researchers, from institutions including
Cambridge University, the University of Oxford and Harvard University; and technical
experts with experience at leading developers, including Google DeepMind, Microsoft
and OpenAI.¹¹²

80. It is a credit to the commitment of those involved that the AI Safety Institute
has been swiftly established, with an impressive and growing team of researchers and
technical experts recruited from leading developers and academic institutions. The
next Government should continue to empower the Institute to recruit the talent it needs.

The AI Safety Summit


81. The need to evaluate the most advanced AI models from a safety perspective was
emphasised by the Prime Minister, Rt. Hon. Rishi Sunak MP, during a speech in June 2023.

107 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 108
108 Introducing the AI Safety Institute, CP 960, Department for Science, Innovation and Technology, 2 November
2023. The Institute will receive a continuation of the Taskforce’s 2024–25 funding as an annual amount for the
remainder of this decade.
109 Frontier AI Taskforce: first progress report, Department for Science, Innovation and Technology, 7 September
2023
110 Red-teaming is a military term, described by the Ministry of Defence as intended to challenge existing thinking
by seeking “… an external viewpoint separate to that of ‘home team’ decision-makers and problem solvers”.
In AI model development, red-teaming processes subject models to technical attacks to identify weaknesses
and create a more robust product. Google’s Red Team “… consists of a team of hackers that simulate a variety
of adversaries, ranging from nation states and well-known Advanced Persistent Threat groups to hacktivists,
individual criminals or even malicious insiders”.
111 Frontier AI Taskforce: first progress report, Department for Science, Innovation and Technology, 7 September
2023
112 Frontier AI Taskforce: first progress report, Department for Science, Innovation and Technology, 7 September
2023; Frontier AI Taskforce: second progress report, 30 October 2023; AI Safety Institute: third progress report,
Department for Science, Innovation and Technology, 5 February 2024

He said that for the UK “… leading on AI also means leading on AI safety”.¹¹³ In its second
progress report the then-Taskforce said that it was “… critical that frontier AI systems
are developed safely and that the potential risks of new models are rigorously and
independently assessed for harmful capabilities before and after they are deployed”.¹¹⁴

82. The Prime Minister said that his vision for the UK as a leader in AI would be
realised via three strands of work: that undertaken by the Taskforce, the pursuit of global
cooperation, and the deployment of AI “… to improve people’s lives”.¹¹⁵ The first two have
been initiated through the work of the AI Safety Institute and the organisation of an AI
Safety Summit at Bletchley Park in November 2023.¹¹⁶ The Government set five objectives
for the Summit:

• a shared understanding of the risks posed by frontier AI and the need for action;

• a forward process for international collaboration on frontier AI safety, including
how best to support national and international frameworks;

• appropriate measures which individual organisations should take to increase
frontier AI safety;

• areas for potential collaboration on AI safety research, including evaluating
model capabilities and the development of new standards to support governance;
and

• a showcase of how ensuring the safe development of AI will enable AI to be used
for good globally.¹¹⁷

What was achieved?

83. Shortly before the Summit, the Prime Minister confirmed that the Frontier AI
Taskforce would become a permanent body, the AI Safety Institute.¹¹⁸ Seven leading
AI developers—Amazon, Anthropic, Google DeepMind, Inflection, Meta, Microsoft
and OpenAI—published their safety policies ahead of the Summit.¹¹⁹ These, together
with a collection of discussion papers prepared by the Government and the Taskforce,¹²⁰
informed the roundtable discussions held over two days at Bletchley Park.¹²¹

113 PM London Tech Week speech, GOV.UK, 12 June 2023


114 Frontier AI Taskforce: second progress report, 30 October 2023
115 PM London Tech Week speech, GOV.UK, 12 June 2023
116 AI Safety Summit: introduction, Department for Science, Innovation and Technology, 31 October 2023
117 AI Safety Summit: introduction, Department for Science, Innovation and Technology, 31 October 2023
118 Prime Minister’s speech on AI: 26 October 2023, GOV.UK, 26 October 2023
119 Leading frontier AI companies publish safety policies, GOV.UK, 27 October 2023
120 Capabilities and risks from frontier AI: A discussion paper on the need for further research into AI risk,
Department for Science, Innovation and Technology, 25 October 2023; Future Risks of Frontier AI: Which
capabilities and risks could emerge at the cutting edge of AI in the future?, Government Office for Science, 27
October 2023; Emerging processes for frontier AI safety, Department for Science, Innovation and Technology,
27 October 2023; Safety and Security Risks of Generative Artificial Intelligence to 2025, HM Government, 27
October 2023
121 AI Safety Summit 2023: Roundtable Chairs’ Summaries, 1 November, Department for Science, Innovation and
Technology, 1 November; AI Safety Summit 2023: Roundtable Chairs’ Summaries, 2 November, Department for
Science, Innovation and Technology, 3 November 2023

84. Speaking to us after the Summit, the Prime Minister’s Summit representative and
Chair of the Advanced Research and Invention Agency, Matt Clifford CBE, said that in
addition to placing AI safety on the international agenda, the Summit had achieved four
substantive outcomes:

• the Bletchley Declaration on AI Safety, signed by 28 attending nations and the
European Union;¹²²

• agreement that a State of the Science report will be produced by expert
representatives of the 28 countries,¹²³ inspired, Mr Clifford told us, by the
Intergovernmental Panel on Climate Change;

• agreement by nine leading AI developers¹²⁴ “… to work with Governments,
including national security actors, to do pre-deployment testing of those models
for the most extreme risks”;¹²⁵ and

• agreement that further AI Safety Summits would be hosted by the Republic of
Korea and France.¹²⁶

85. In May 2024, the AI Seoul Summit saw 16 leading developers agree to implement
eight commitments focused on mitigating the “severe risks” posed by the most advanced
AI, with operational updates promised by an early 2025 Summit, to be hosted by France.¹²⁷
These commitments are intended to deliver three outcomes:

• organisations effectively identify, assess and manage risks when developing and
deploying their frontier AI models and systems;

• organisations are accountable for safely developing and deploying their frontier
AI models and systems; and

• organisations’ approaches to frontier AI safety are appropriately transparent to
external actors, including governments.¹²⁸

Future priorities
86. The AI Safety Institute has said that it “… is not a regulator and will not determine
government regulation”, but would focus on the evaluation of advanced AI, drive

122 The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023, Department for
Science, Innovation and Technology, 1 November 2023
123 ‘State of the Science’ Report to Understand Capabilities and Risks of Frontier AI: Statement by the Chair,
Department for Science, Innovation and Technology, 2 November 2023
124 The developers were Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta,
Microsoft, Mistral AI, OpenAI and xAI
125 Safety Testing: Chair’s Statement of Session Outcomes, Department for Science, Innovation and Technology,
2 November 2023
126 World leaders, top AI companies set out plan for safety testing of frontier as first global AI Safety Summit
concludes, GOV.UK, 2 November 2023; UK and Republic of Korea to build on legacy of Bletchley Park, GOV.UK,
12 April 2024; Qq. 658–659
127 Frontier AI Safety Commitments: AI Seoul Summit 2024, Department for Science, Innovation and Technology,
21 May 2024
128 Frontier AI Safety Commitments: AI Seoul Summit 2024, Department for Science, Innovation and Technology,
21 May 2024

foundational AI safety research, and facilitate information exchange between national
and international actors.¹²⁹ Emran Mian, Director General for Digital Technologies and
Telecoms at DSIT, explained to us why the Institute would undertake safety evaluations:

… the companies are doing quite a lot of the testing. That is great and
positive. It is right that they should do it and that they should contribute
to the development of the science on safety; but relying solely on the
companies gives us some pause for thought, both because of the commercial
imperatives that the companies may have in the rush to market, but also in
the classic issue that arises in these kinds of safety conversations, about
companies marking their own homework.¹³⁰

The Institute has emphasised that it would not “… designate any particular AI system as
‘safe’ … [nor] hold responsibility for any release decisions”.¹³¹

87. In its third progress report the Institute confirmed that it had begun testing models,¹³²
and that these had been selected based on “… estimates of the risk of a system possessing
harmful capabilities, using inputs such as compute used for training, as well as expected
accessibility”.¹³³ In May 2024, the Institute released the results of its evaluations of five
publicly-available models,¹³⁴ and has also announced a programme of joint work with its
counterpart in the United States that will include “… at least one joint testing exercise on
a publicly accessible model”.¹³⁵

88. Exactly which model, or models, the Institute will undertake pre-release testing
on was not confirmed by the Secretary of State for Science, Innovation and Technology
when questioned in the House of Commons,¹³⁶ and had not been confirmed at the time
our Report was finalised. It was reported by the news outlet Politico in April that only
Google DeepMind had allowed access to its Gemini model for pre-release testing, and that
Anthropic, Meta and OpenAI had yet to grant access to as-yet unreleased models.¹³⁷

89. Although the Institute is not a regulator, it has undeniably played a decisive
role in shaping the UK’s regulatory approach to AI. We commend the work of the
Institute and its researchers in facilitating and informing the ongoing international
conversation about AI governance.

90. However, we are concerned by suggestions that the Institute has been unable to
access as-yet unreleased AI models to perform the pre-deployment safety testing it
was set up to undertake. If true, this would undermine the delivery of the Institute’s
mission and its ability to increase public trust in the technology.

91. In its response to this Report, the Government should confirm which models the
AI Safety Institute has undertaken pre-deployment safety testing on, the nature of the

129 Introducing the AI Safety Institute, CP 960, Department for Science, Innovation and Technology, p. 8
130 Q667
131 Introducing the AI Safety Institute, CP 960, Department for Science, Innovation and Technology, p. 9
132 AI Safety Institute: third progress report, Department for Science, Innovation and Technology, 5 February 2024
133 AI Safety Institute approach to evaluations, Department for Science, Innovation and Technology, 9 February
2024
134 Advanced AI evaluations at AISI: May update, AI Safety Institute, 20 May 2024
135 UK & United States announce partnership on science of AI safety, GOV.UK, 2 April 2024
136 HC Deb, 17 April 2024, col 288 (Commons Chamber)
137 Rishi Sunak struggles to implement his ‘landmark’ AI testing deal, Politico, 19 April 2024

testing, a summary of the findings, whether any changes were made by the model’s
developers as a result, and whether any developers were asked to make changes but
declined to do so.

92. The Government should also confirm which models the Institute has been unable
to secure access to, and the reason for this. If any developers have refused access—
which would represent a contravention of the reported agreement at the November
2023 Summit at Bletchley Park—the Government should name them and detail their
justification for doing so.

7 The international dimension


93. The UK is not the only jurisdiction to have dedicated significant thought and resource
to the development of its AI governance framework. The United States and the European
Union have over the course of our Inquiry debated and developed their own regulatory
regimes. In this Chapter we will examine the steps taken by policymakers in Washington,
D.C. and Brussels.

The United States


94. The United States (US) is home to leading AI developers and start-ups such as
OpenAI and Anthropic.¹³⁸ It has led each edition of the Tortoise AI Index, a ranking of
countries by AI implementation, innovation and investment.¹³⁹ According to data from
the Organisation for Economic Co-operation and Development (OECD), the US since
2012 has accounted for an average of 56% of global annual venture capital investment in
AI and led the ranking for this metric in all but two years during the same period.¹⁴⁰

95. Leading ‘Big Tech’ firms based in the US have also emerged as significant investors
in AI,¹⁴¹ with Microsoft having reportedly committed up to $13 billion to OpenAI, the
developer behind ChatGPT, including a $10 billion investment announced in January
2023.¹⁴²

Voluntary commitments

96. Just as the UK secured voluntary commitments from leading AI developers,¹⁴³ the
White House announced in July 2023 that Amazon, Anthropic, Google, Inflection, Meta,
Microsoft, and OpenAI had agreed to make “… voluntary commitments… to help move
toward safe, secure, and transparent development of AI technology”.¹⁴⁴ A further eight
companies signed up to the commitments in September.¹⁴⁵

97. The White House has described the voluntary commitments as intended “… to
advance a generative AI legal and policy regime” and said that it envisioned them “…
remain[ing] in effect until regulations covering substantially the same issues come into
force”.¹⁴⁶ The commitments included:

• internal and external red-teaming of models or systems in areas including
misuse, societal risks, and national security concerns;

138 Meet the generative AI startups pulling in the most cash, PitchBook, 18 October 2023
139 The Global AI Index, Tortoise, accessed 23 May 2024
140 Live data: top countries in VC investments in AI by industry, OECD.AI visualisations powered by JSI using data
from Preqin, accessed 23 May 2024
141 Big Tech outspends venture capital firms in AI investment frenzy, Financial Times, 29 December 2023
142 How Microsoft’s multibillion-dollar alliance with OpenAI really works, Financial Times, 15 December 2023
143 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 7
144 FACT SHEET: Biden- Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence
Companies to Manage the Risks Posed by AI, The White House, 21 July 2023
145 FACT SHEET: Biden- Harris Administration Secures Voluntary Commitments from Eight Additional Artificial
Intelligence Companies to Manage the Risks Posed by AI, The White House, 12 September 2023
146 Voluntary AI Commitments, The White House, 12 September 2023, p. 1

• information sharing among companies and governments regarding trust and
safety risks, dangerous or emergent capabilities, and attempts to circumvent
safeguards;

• development and deployment of mechanisms that enable users to understand if
audio or visual content is AI-generated; and

• disclosure of model or system capabilities, limitations, and domains of
appropriate and inappropriate use, including discussion of societal risks, such as
effects on fairness and bias.¹⁴⁷

98. Although wide-ranging, the scope of the commitments was limited. The White
House said that “… where commitments mention particular models, they apply only to
generative models that are overall more powerful than the current most advanced model
produced by the company making the commitment”.¹⁴⁸

Executive Order

99. Direct industry engagement in the form of the voluntary commitments was
followed in October 2023 by the announcement of a Presidential Executive Order on the
safe, secure and trustworthy development and use of AI,¹⁴⁹ which, although primarily
applicable to Federal departments and agencies, was notable for its use of the
Defense Production Act, a law passed during the Cold War that has afforded Presidents
“… significant emergency authority to control domestic industries”.¹⁵⁰

100. The Executive Order detailed 26 requirements across eight areas, including standards
for AI safety and security, advancing equity and civil rights, and ensuring responsible and
effective Government use of AI.¹⁵¹ It placed safety testing, risk mitigation and reporting
requirements on developers of the most powerful models, with any model trained using
compute power above a set threshold required to comply.¹⁵²
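
For illustration, training compute for large models is often estimated with the rule of thumb of roughly six floating-point operations per parameter per training token, and the Executive Order’s reporting trigger was set at 10²⁶ operations. A minimal sketch of how a developer might check a model against such a threshold follows; the model figures below are hypothetical.

```python
# Minimal sketch of a compute-threshold check, assuming the common
# approximation: training FLOPs ~ 6 * parameters * training tokens.
# The 1e26 figure reflects the Executive Order's reporting trigger;
# the model figures below are hypothetical, not real disclosures.

THRESHOLD_FLOPS = 1e26

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6.0 * n_parameters * n_tokens

# (parameters, training tokens) for two hypothetical models
models = {
    "hypothetical-70b-model": (70e9, 15e12),   # 70B parameters, 15T tokens
    "hypothetical-1t-model": (1e12, 30e12),    # 1T parameters, 30T tokens
}

for name, (params, tokens) in models.items():
    flops = estimated_training_flops(params, tokens)
    status = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} the 1e26 threshold)")
```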

State-level initiatives

101. The White House has stated its desire to pursue the successful passage of bipartisan
AI-specific legislation through the United States Congress,¹⁵³ but as the November 2024
Presidential election nears, the Federal legislative process has slowed and the prospect of
new legislation being passed has receded.¹⁵⁴

147 Voluntary AI Commitments, The White House, 12 September 2023, pp. 1–3
148 Voluntary AI Commitments, The White House, 12 September 2023, p. 1
149 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, The White
House, 30 October 2023
150 What Is the Defense Production Act? Council on Foreign Relations, 22 December 2021
151 FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, The
White House, 30 October 2023
152 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, The White
House, 30 October 2023
153 FACT SHEET: Biden- Harris Administration Secures Voluntary Commitments from Eight Additional Artificial
Intelligence Companies to Manage the Risks Posed by AI, The White House, 12 September 2023
154 Capitol Hill stunner: 2023 led to fewest laws in decades, Axios, 18 December 2023

102. In the absence of Federal legislation, policymakers in states such as California,¹⁵⁵
Colorado,¹⁵⁶ Florida¹⁵⁷ and New York¹⁵⁸ have introduced bills and measures targeted at the
development and deployment of AI. According to LexisNexis, a data analytics company,
“as of January 11 [2024], 89 bills referring to AI had been pre-filed or introduced in 20
states… [in addition to] more than 100 AI bills that are being carried over from last year”.¹⁵⁹

103. California’s Privacy Protection Agency has also published draft proposals for a
regime to establish individual “… opt-out rights, pre-use notice requirements and access
rights which would enable state residents to obtain meaningful information on how their
data is being used for automation and AI tech”.¹⁶⁰

The EU AI Act
104. In December 2023 political agreement was reached between representatives of the
European Parliament, the Council of the European Union and the European Commission
on the European Union (EU) AI Act.¹⁶¹ This was followed by the approval of ambassadors
from the 27 EU member states in February 2024,¹⁶² and a European Parliament ratification
vote in March 2024.¹⁶³ This represented the culmination of a legislative process that began
in April 2021 with the publication of draft proposals by the Commission.¹⁶⁴

A risk-based approach

105. The AI Act takes a ‘horizontal’,¹⁶⁵ risk-based approach, with AI uses categorised into
four levels of risk. The EU institutions have also agreed specific requirements for general-
purpose AI models. The risk categories are set out below.

Minimal risk

106. Examples of uses deemed to present minimal risk include AI-assisted recommender
systems, which suggest or recommend additional products to consumers,¹⁶⁶ and spam filters.
The Commission has said that “the vast majority” of uses for AI will fall into this category
and will not be subject to any additional requirements beyond existing legislation.¹⁶⁷

155 California’s privacy watchdog eyes AI rules with opt-out and access rights, TechCrunch, 27 November 2023
156 The Colorado AI Act: What you need to know, International Association of Privacy Professionals, 21 May 2024
157 What States are Making Moves in US AI Regulation in 2024? Holistic AI, 11 January 2024
158 What States are Making Moves in US AI Regulation in 2024? Holistic AI, 11 January 2024
159 State AI Legislation Off to Quick Start in 2024, LexisNexis, 16 January 2024
160 California’s privacy watchdog eyes AI rules with opt-out and access rights, TechCrunch, 27 November 2023
161 Commission welcomes political agreement on Artificial Intelligence Act, European Commission, 9 December
2023
162 EU countries give crucial nod to first-of-a-kind Artificial Intelligence law, Euractiv, 2 February 2024
163 Artificial Intelligence Act: MEPs adopt landmark law, European Parliament, 13 March 2024
164 Regulation of the European Parliament and the Council laying down harmonised rules on artificial intelligence
(Artificial Intelligence Act) and amending certain Union legislative acts, European Commission, 21 April 2021
165 AI in the EU and UK: two approaches to regulation and international leadership, UK in a changing Europe, 26
January 2023
166 Recommendation System, NVIDIA, accessed 23 May 2024
167 Artificial intelligence: questions and answers, European Commission, 12 December 2023

High-risk

107. An Annex to the AI Act listed uses to be designated as high-risk.¹⁶⁸ Examples included
tools used in recruitment, the judicial system or democratic processes.¹⁶⁹ The Commission
has said that the list will be updated as appropriate.¹⁷⁰ High-risk uses will be permitted
subject to compliance with requirements such as “… risk-mitigation systems, high quality
of data sets, logging of activity, detailed documentation, clear user information, human
oversight, and a high level of robustness, accuracy and cybersecurity”.¹⁷¹

Unacceptable risk

108. Once fully in force the AI Act will ban a small number of uses that have been deemed
to pose an unacceptable level of risk, including:

• real-time remote biometric identification in publicly accessible spaces by law
enforcement, subject to narrow exceptions;¹⁷²

• categorisation of people based on biometric data to deduce or infer their race,
political opinions, trade union membership, religious or philosophical beliefs
or sexual orientation. Law enforcement will still be permitted to filter datasets
based on biometric data;

• individual predictive policing, a practice that uses algorithms and data to make
crime-related predictions;¹⁷³

• emotion recognition in workplaces and education institutions, unless for medical
or safety reasons such as monitoring the tiredness levels of a pilot; and

• untargeted scraping of the internet or CCTV footage for facial images to build up
or expand databases.¹⁷⁴

Transparency

109. The AI Act introduces transparency requirements to ensure that citizens are made
aware whenever they interact with a machine, such as in the use of chatbots. It also
introduces a requirement for “… AI generated content… to be labelled as such, and users
need to be informed when biometric categorisation or emotion recognition” tools are
deployed.¹⁷⁵

168 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on
artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, Annex III, Council
of the European Union, p. 248
169 Artificial intelligence: questions and answers, European Commission, 12 December 2023
170 Artificial intelligence: questions and answers, European Commission, 12 December 2023
171 Commission welcomes political agreement on Artificial Intelligence Act, European Commission, 9 December
2023
172 Artificial intelligence: questions and answers, European Commission, 12 December 2023
173 AI in policing and security, Parliamentary Office of Science and Technology, 29 April 2021
174 Artificial intelligence: questions and answers, European Commission, 12 December 2023
175 Artificial intelligence: questions and answers, European Commission, 12 December 2023

General-purpose AI

110. Under the provisions of the AI Act, general-purpose models, including those that
underpin generative AI tools such as GPT-4¹⁷⁶ or Google Gemini,¹⁷⁷ will be subject to
specific requirements designed to ensure transparency and provide reassurances to those
who deploy them.¹⁷⁸

111. For models that have been trained using compute power above a certain threshold
and that are deemed to potentially “… pose systemic risks, there will be additional binding
obligations related to managing risks and monitoring serious incidents, performing
model evaluation and adversarial testing”.¹⁷⁹ These will be put into practice “… through
codes of practices developed by industry, the scientific community, civil society and other
stakeholders together with the Commission”.¹⁸⁰

Implementation, enforcement and exemptions

112. The AI Act “… will be fully applicable 24 months after entering into force”,¹⁸¹ likely
meaning during June 2026.¹⁸² Some provisions, including the bans on prohibited uses
outlined above, will enter into force by the end of 2024 and others, such as the provisions
relating to general-purpose AI, are expected to apply from June 2025.¹⁸³ It will apply to
“public and private actors inside and outside the EU as long as the AI system is placed on
the Union market, or its use affects people located in the EU”.¹⁸⁴

113. Enforcement of the AI Act has been made a joint responsibility of designated
authorities in each EU member state,¹⁸⁵ and a new European AI Office within the
Commission, which has been tasked with enforcing the provisions that apply to general-
purpose AI.¹⁸⁶

114. The International Association of Privacy Professionals (IAPP), a trade body, has
pointed out that successful implementation of the AI Act will require skilled staff in each
member state authority, with “… sufficient expertise in fundamental rights law, personal
data protection and others”.¹⁸⁷ The same can be said of the Commission’s AI Office, which
has begun to recruit staff.¹⁸⁸

115. Some member states, such as Spain, have established new entities to act as their
designated authority. Others, such as Luxembourg, Ireland and the Netherlands, have
assigned the task to existing public bodies or Government departments.¹⁸⁹ The IAPP has
176 GPT-4, OpenAI, accessed 23 May 2024
177 Gemini, Google, accessed 23 May 2024
178 Commission welcomes political agreement on Artificial Intelligence Act, European Commission, 9 December
2023
179 Artificial intelligence: questions and answers, European Commission, 12 December 2023
180 Commission welcomes political agreement on Artificial Intelligence Act, European Commission, 9 December
2023
181 Artificial intelligence: questions and answers, European Commission, 12 December 2023
182 Commission presses governments to appoint AI regulators, Euronews, 3 April 2024
183 Commission presses governments to appoint AI regulators, Euronews, 3 April 2024
184 Artificial intelligence: questions and answers, European Commission, 12 December 2023
185 As the EU AI Act enters into force, focus shifts to countries’ oversight appointments, Euronews, 12 March 2024
186 Commission Decision Establishing the European AI Office, European Commission, 24 January 2024
187 Will the EU AI Act work? Lessons learned from past legislative initiatives, future challenges, International
Association of Privacy Professionals, 17 April 2024
188 Commission to look for head of AI Office only when law is fully approved, Euronews, 20 March 2024
189 As the EU AI Act enters into force, focus shifts to countries’ oversight appointments, Euronews, 12 March 2024

highlighted the uneven states of readiness among European regulators, with enforcement
of the AI Act described by some as “a tall order”.¹⁹⁰ Non-compliance is punishable by
fines of up to €35 million or 7% of the offender’s total worldwide annual turnover in the
previous financial year, whichever is higher.¹⁹¹

116. The scope of the AI Act is not unlimited. The following uses are exempt:

• uses of AI designated as minimal risk, although providers can decide to adhere
to the AI Act in full, and sign up to voluntary codes of conduct;

• research, development and prototyping work done prior to a model being
released onto the market; and

• AI models and tools used solely for military, defence or national security
purposes.¹⁹²

We will examine the use of AI in a military, defence, and national security context in
Chapter 8 of this Report.

117. Speaking prior to the finalisation of the AI Act, the Secretary of State for Science,
Innovation and Technology said that she had heard “… deep concerns among some of its
member states that it will stifle innovation”.¹⁹³ She also argued that the AI Act’s risk-based
approach was “… quite a blunt tool… we are taking an approach that is much more
context-based”.¹⁹⁴

118. The Computer and Communications Industry Association, a trade body, has said
that the impact of the AI Act “… needs to be closely monitored to avoid overburdening
innovative AI developers with disproportionate compliance costs and unnecessary
red tape”.¹⁹⁵ French President Emmanuel Macron has also argued that the EU “… can
decide to regulate much faster and much stronger than our major competitors. But we
will regulate things that we will no longer produce or invent. This is never a good idea”.¹⁹⁶
Hugh Milward of Microsoft UK told us that the EU’s proposed approach was “… a model
of how not to do it”, citing similar concerns.¹⁹⁷

119. Beyond the EU AI Act, the Commission has proposed an Artificial Intelligence
Liability Directive that would create “… uniform rules for certain aspects of non-
contractual civil liability for damage caused with the involvement of AI systems”.¹⁹⁸ In late
2023 the EU institutions also agreed a new Directive on Liability for Defective Products
to replace the existing Product Liability Directive, which will account for the growing
prevalence of AI and allow individuals to bring claims against product manufacturers
and, in some cases, product component manufacturers.¹⁹⁹

190 European regulators discuss AI Act enforcement, fines, International Association of Privacy Professionals, 10
April 2024
191 Artificial intelligence: questions and answers, European Commission, 12 December 2023
192 Artificial intelligence: questions and answers, European Commission, 12 December 2023
193 Q757
194 Q781
195 Generative AI Thriving in Competitive EU Market, New Study Finds, Computer and Communications Industry
Association, 21 March 2024
196 EU’s new AI Act risks hampering innovation, warns Emmanuel Macron, Financial Times, 11 December 2023
197 Q143
198 Liability rules for Artificial Intelligence, European Commission, 28 September 2022
199 What can you expect from the new product liability directive? Covington, 14 March 2024

China
120. China ranked second in the latest Tortoise Global AI Index, behind the United States.²⁰⁰
In a submission to our inquiry Dr Steve Rolf, a research fellow at the University of Sussex,
told us that China wanted to increase the size of its digital economy:

… to 10 per cent of GDP by 2025 and to this end commercial AI applications
have been successfully embedded in areas like smart cities programs,
education and healthcare, and autonomous vehicles.²⁰¹

121. Statistics released by the Chinese authorities showed a notable increase in imports
of data processing equipment and computer chips in the first quarter of 2024,²⁰² and it
has been reported that the Chinese state has offered significant subsidies to encourage
the development of AI start-ups whilst maintaining close oversight of the approval of new
models for release.²⁰³

122. Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, has
described China as being “… in the midst of rolling out some of the world’s earliest and
most detailed regulations governing AI”.²⁰⁴ He has characterised China’s approach to AI
as the first example of its authorities “… having to do a trade-off between two Communist
party goals of sustaining AI leadership and controlling information”.²⁰⁵

123. From a geopolitical perspective, Dr Rolf’s submission to our inquiry described China’s
overall regulatory approach as “… substantially driven by rivalry with the United States
for technological supremacy in digital technology and AI”.²⁰⁶ Shortly after the publication
of our interim Report, the Government confirmed that China had been invited to the
AI Safety Summit at Bletchley Park, with Foreign Secretary Rt. Hon. James Cleverly MP
describing a strategy “… to engage [with China] where it is in the U.K.’s national interest”.²⁰⁷

124. China was among the signatories to the Bletchley Declaration announced at the 2023
AI Safety Summit,²⁰⁸ and has subsequently engaged with the United States on AI-related
topics at researcher, business and diplomatic levels, according to media reports.²⁰⁹ We will
return to the International Coordination Challenge in Chapter 8 of this Report.

200 The Global AI Index, Tortoise, accessed 23 May 2024


201 Dr Steve Rolf (GAI0104)
202 China’s trade returns to growth on back of AI equipment imports, Financial Times, 9 May 2024
203 China offers AI computing ‘vouchers’ to its underpowered start-ups, Financial Times, 4 March 2024
204 China’s AI Regulations and How They Get Made, Carnegie Endowment for International Peace, 10 July 2023, p. 3
205 China to lay down AI rules with emphasis on content control, Financial Times, 11 July 2023
206 Dr Steve Rolf (GAI0104)
207 UK foreign secretary confirms China invite to AI summit, Politico, 19 September 2023
208 The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023, GOV.UK, 1
November 2023
209 US companies and Chinese experts engaged in secret diplomacy on AI safety, Financial Times, 11 January 2024;
White House science chief signals US-China co-operation on AI safety, Financial Times, 25 January 2024; US and
China to hold first talks to reduce risk of AI ‘miscalculation’, Financial Times, 13 May 2024

International standards
125. Both the UK and EU have emphasised the importance of technical industry
standards. The European Commission has described the AI Act as a framework that
“… leaves the concrete technical solutions and operationalisation primarily to industry-
driven standards”.²¹⁰

126. Similarly, in its response to the AI White Paper consultation the Government said that
it would continue to support UK participation in standards fora “… to both leverage the
benefits of global technical standards here in the UK and deliver global digital technical
standards shaped by democratic values”.²¹¹

127. In a submission to our inquiry, the British Standards Institution (BSI), which
represents the UK in European and international technical standards organisations,
said that “… standards and accreditation offer a business and consumer friendly alternative
to regulation and enable interoperability in complex supply-chains”, by “… underpinning
high-level regulatory objectives with technical or framework specificities that can be
adopted by businesses”.²¹²

128. The BSI said that the UK should remain an active participant in international standards
fora, “… to learn from the approach of others, ensure alignment where possible, and to
influence others in adopting a use-case-specific approach to risk-based AI regulation”.²¹³
An AI Standards Hub has been established by the BSI, the Alan Turing Institute and the
National Physical Laboratory, with support from DSIT, “… to support stakeholders to
understand and engage with AI standardisation and strengthen AI governance practices
domestically and internationally”.²¹⁴

129. In our interim Report we highlighted moves by both the United States and
European Union to develop their own approaches to AI governance. The subsequent
White House Executive Order and the EU AI Act are clear attempts to secure
competitive regulatory advantage.

130. It is true that the size of both the United States and European Union markets
may mean that ‘the Washington effect’ and ‘Brussels effect’—referring to the de facto
standardising of global regulatory approaches, potentially to the detriment of the
UK’s distinct approach—will apply to AI governance. Nevertheless, the distinctiveness
of the UK’s approach and the success of the AI Safety Summit have underlined the
significance of its current and future role.

131. Both the US and EU approaches to AI governance have their downsides. The former
imposes requirements only on Federal bodies and otherwise relies on voluntary
commitments from leading developers. The latter has been criticised for its top-down,
prescriptive approach and the potential for uneven implementation across different
member states.

210 Artificial intelligence: questions and answers, European Commission, 12 December 2023
211 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 52
212 British Standards Institution (GAI0028)
213 British Standards Institution (GAI0028)
214 Use of artificial intelligence in Government, National Audit Office, HC 612, p. 31; About the AI Standards Hub, AI
Standards Hub, accessed 23 May 2024
Governance of artificial intelligence (AI) 37

132. The UK is entitled to pursue an approach that considers developments in other
jurisdictions but does not unthinkingly replicate them. However, where there are lessons
to be learned from other jurisdictions, the next Government should be willing to apply
them.

133. The UK has a long history of encouraging technological innovation by offering
a stable, expert regulatory environment coupled with clear industry standards. The
current Government is therefore right to have encouraged the growth of a strong AI
sector in the UK, engaged with leading developers through the AI Safety Institute and
future Summits, and participated in international standards fora. This international
agenda should be continued by the next Government, and coupled with the swift
establishment of a domestic framework that sufficiently addresses the Twelve Challenges
of AI Governance highlighted in our interim Report.

8 Twelve Challenges of AI Governance revisited
134. In our interim Report, published in August 2023, we identified a sense in many
jurisdictions, including the UK, “… that the pace of development of AI requires an urgent
response from policymakers if the public interest is not to be outstripped by the pace of
deployment”.²¹⁵ We also found that the process of determining a coherent policy response
to the increasing prevalence of AI was being complicated by “… the reality that the optimal
responses to all of the challenges AI gives rise to are not always—at this stage—obvious”.²¹⁶

135. Our interim Report provided an initial contribution to these important discussions.
It identified Twelve Challenges that AI governance frameworks must meet and called on
the Government and regulators to address them through the UK’s approach.²¹⁷

136. Although progress has been made in the UK and other jurisdictions, we believe that
the Twelve Challenges identified in our interim Report still apply. In this Chapter we
suggest potential solutions to each of them. These should not be viewed as fully-formed
policy responses, but as representing this Committee’s ‘starter for ten’ in a complex,
rapidly developing area.

1: The Bias Challenge


Developers and deployers of AI models and tools must not merely acknowledge the
presence of inherent bias in datasets; they must take steps to mitigate its effects

137. Our interim Report highlighted how developers and researchers relied on data to
test, train, operate and refine AI models and tools,²¹⁸ and that these datasets contained
inherent bias.²¹⁹

138. The Government has acknowledged that AI can “… entrench bias and discrimination”
and said that it “… is working closely with the Equality and Human Rights Commission
and ICO [Information Commissioner’s Office] to develop new solutions to address bias
and discrimination in AI systems”.²²⁰

139. Other public bodies have also taken steps to address the Bias Challenge. The Office
of the Police Chief Scientific Adviser has produced a Covenant for Using AI in Policing,²²¹
whilst the Metropolitan Police has commissioned independent testing and analysis of the
performance of its facial recognition tool, procured from NEC, by the National Physical

215 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 42
216 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 42
217 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, Chapter 4
218 Creative Commons (GAI0015), Institution of Engineering and Technology (GAI0021)
219 Q64
220 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 20
221 Covenant for Using Artificial Intelligence in Policing, National Police Chiefs Council, 28 September 2023

Laboratory.²²² In the field of health, the Medicines and Healthcare products Regulatory
Agency has said that it will require approval applications for new medical devices, many
of which are AI-assisted, to detail how they will address bias.²²³

140. AI can entrench and accelerate existing biases. The current Government, future
administrations and sectoral regulators should require deployers of AI models and
tools to submit them to robust, independent testing and performance analysis prior to
deployment.

141. Model developers and deployers should be required to summarise what steps they
have taken to account for bias in datasets used to train models, and to statistically
report on the levels of bias present in outputs produced using AI tools. This data should
be routinely disclosed in a similar way to company pay gap reporting.
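
By way of illustration only, one simple statistic that such reporting could draw on is the gap in positive-outcome rates between groups, sometimes called the demographic parity difference. The sketch below assumes binary decisions recorded alongside a group label; both the sample data and the metric are illustrative rather than a prescribed methodology.

```python
# Illustrative sketch of one possible bias statistic: the gap in
# positive-outcome rates between groups (demographic parity difference).
# The metric and sample data are illustrative, not a prescribed standard.

from collections import defaultdict

def positive_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {group: positives[group] / totals[group] for group in totals}

def parity_gap(decisions):
    """Largest difference in positive-outcome rates across groups."""
    rates = positive_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening decisions: (group, 1 = shortlisted, 0 = rejected).
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

print(positive_rates(sample))  # {'A': 0.666..., 'B': 0.333...}
print(parity_gap(sample))      # 0.333..., i.e. a 33 percentage-point gap
```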

2: The Privacy Challenge


Privacy and data protection frameworks must account for the increasing capability and
prevalence of AI models and tools, and ensure the right balance is struck

142. Our interim Report highlighted the need to balance the protection of privacy and the
potential benefits to be gained from the deployment of AI, particularly in sectors such as
law enforcement.²²⁴

143. Information Commissioner John Edwards told us that the Information Commissioner’s
Office (ICO), one of the UK’s principal privacy regulators, had paid close attention to the
implications of AI’s increasing prevalence for some time and had sought “… to ensure that
all parts of the supply chain in AI—whether they are developing models, training models
or deploying retail applications of them” were aware of their obligations under existing
data protection regulations.²²⁵

144. In a paper that set out its strategic approach to AI, the ICO said that many of the
risks associated with the deployment of AI “… derive from how data—and specifically
personal data—is used in the development and deployment of AI systems”.²²⁶ It said that
by enforcing data protection law, it would ensure that “organisations who are accountable
for the processing of personal data are expected to identify the risks, mitigate them and be
able to demonstrate how they achieve this”.²²⁷ In sensitive areas such as facial recognition
technology, the ICO has said that “… deployments must be proportionate and strike the
correct balance between privacy intrusion and the purpose they are seeking to achieve”.²²⁸

145. Regulators and deployers should ensure that the right balance is maintained
between the protection of privacy and pursuing the potential benefits of AI.
Determining this balance will depend on the context in which the technology is being
deployed, with reference to the relevant laws and regulations.

222 Facial recognition technology in law enforcement equitability study: final report, National Physical Laboratory, 5
April 2023
223 New action to tackle ethnic and other biases in medical devices, GOV.UK, 12 March 2024
224 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 49
225 Q602
226 Regulating AI: the ICO’s Strategic Approach, Information Commissioner’s Office, 30 April 2024, p. 4
227 Regulating AI: the ICO’s Strategic Approach, Information Commissioner’s Office, 30 April 2024, p. 5
228 Regulating AI: the ICO’s Strategic Approach, Information Commissioner’s Office, 30 April 2024, p. 7

146. Sectoral regulators should publish detailed guidance to help deployers of AI strike
the balance between the protection of privacy and securing the technology’s intended
benefits. In instances where regulators determine that this balance has not been struck, or
where the relevant laws or regulatory requirements have not been met, they should impose
sanctions or prohibit the use of AI models or tools.

3: The Misrepresentation Challenge


Those who use AI to misrepresent others, or allow such misrepresentation to take place
unchallenged, must be held accountable

147. Our interim Report noted how new AI-assisted tools had significantly expanded
“opportunities for malign actors to ‘pass off’ content as being associated with particular
individuals or organisations when it is in fact confected”.²²⁹ Since then, the extent to
which the Misrepresentation Challenge is present across society and the economy has
been further underlined.

148. The increasing prevalence of AI-assisted tools capable of producing pornographic
‘deepfake’ images and videos targeted at women and girls has been highlighted by Glamour
magazine.²³⁰ Committee members participated in an event in Parliament that discussed
the findings of the Glamour Consent Survey undertaken together with Refuge, a charity
that provides specialist support for women and children experiencing domestic violence,²³¹
and how the powers given to Ofcom, the online safety regulator, under the Online Safety
Act 2023 should be implemented in a way that addresses the increasing threat posed by
online deepfakes.²³²

149. The Government subsequently brought forward an amendment to the Criminal
Justice Bill before Parliament, which would criminalise the creation of sexually explicit
deepfake images without consent, or the installation of equipment to enable someone
to do so.²³³ The amendment was added to the Bill on 15 May,²³⁴ but as our Report was
finalised it was unclear whether its remaining stages would be completed prior to the
dissolution of Parliament.

150. We welcome the Government amendment to the Criminal Justice Bill as a
necessary step towards ensuring the UK’s legal framework reflects the current state
of technological development and protects citizens, primarily women and girls, from
the consequences of AI-assisted misrepresentation, including deepfake pornography.
Should the Bill’s remaining stages fail to be completed prior to the dissolution of
Parliament, the next Government must introduce similar provisions as soon as is
practicable after the General Election.

151. The Misrepresentation Challenge has also become increasingly visible in our politics
as the General Election expected this year approaches.²³⁵ Deepfake audio and video clips
229 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 54
230 It’s not just Taylor Swift; all women are at risk from the rise of deepfakes, Glamour, 31 January 2024
231 We asked thousands of GLAMOUR readers about sexual consent, from sexual assault to deepfaking. Here’s what
they said…, Glamour, 28 February 2024
232 How the Online Safety Act will help to protect women and girls, Ofcom, 29 November 2023
233 Government cracks down on ‘deepfake’ creation, GOV.UK, 19 April 2024
234 HC Deb, 15 May 2024, col 374 (Commons Chamber)
235 Rishi Sunak suggests general election in second half of year, BBC News, 4 January 2024

that misrepresented the Leader of the Opposition, Rt Hon Sir Keir Starmer MP,²³⁶ and the
Mayor of London, Sadiq Khan,²³⁷ were widely viewed and circulated on online platforms.
The National Cyber Security Centre, a part of GCHQ, said in its 2023 annual review that:

… rather than presenting entirely new risks, it is AI’s ability to enable existing
techniques which poses the biggest threat. For example: large language
models will almost certainly be used to generate fabricated content, AI-
created hyperrealistic bots will make the spread of disinformation easier
and the manipulation of media for use in deepfake campaigns will likely
become more advanced.²³⁸

152. The Government has established a Defending Democracy Taskforce intended “… to
reduce the threat of foreign interference in our democracy by bringing together a wide
range of expertise across government, the intelligence community and industry”.²³⁹ It
has also cited provisions in existing legislation, such as the Elections Act 2022, National
Security Act 2023 and Online Safety Act 2023 as capable of helping to address the
Misrepresentation Challenge.²⁴⁰

153. Some online platforms and technology companies have also accepted their role in
addressing AI-generated content that aims to interfere in the democratic process. At the
February 2024 Munich Security Conference, a group of companies including Google, Meta,
Microsoft, OpenAI, TikTok and X, formerly Twitter, announced “… a set of commitments
to deploy technology countering harmful AI-generated content meant to deceive voters”.²⁴¹

154. The Government and regulatory authorities, informed by the work of the Defending
Democracy Taskforce, should safeguard the integrity of the upcoming General Election
campaign in their approach to the online platforms that host deepfake content which
seeks to exert a malign influence on the democratic process. If these platforms are found
to have been slow to remove such content, or to have facilitated its spread, regulators
must take stringent enforcement action—including holding senior leadership personally
liable and imposing financial sanctions.

155. A cross-Government public awareness campaign should be launched to inform
the public about the growing prevalence of AI-assisted misrepresentation, the potential
consequences, what the Government is doing to address the Challenge, and what steps
individuals can take to protect themselves online.

4: The Access to Data Challenge


Access to data, and the responsible management of it, are prerequisites for a healthy,
competitive and innovative AI industry and research ecosystem

236 Deepfake video shows Keir Starmer promoting an investment scheme, Full Fact, 16 November 2023
237 No evidence clip of Sadiq Khan supposedly calling for ‘Remembrance weekend’ to be postponed is genuine, Full
Fact, 10 November 2023
238 Annual Review 2023: Making the UK the safest place to live and work online, National Cyber Security Centre,
14 November 2023, p. 40
239 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department
for Science, Innovation and Technology, 6 February 2024, pp. 22–23; JCNSS launches inquiry on Defending
Democracy with UK election expected this year, 1 February 2024
240 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, pp. 22–23
241 Tech Accord to Combat Deceptive Use of AI in 2024 Elections, Munich Security Conference, 16 February 2024

156. Our interim Report highlighted the extent to which AI developers and researchers
alike rely on access to high-quality data, and the potential competition concerns that
increased market consolidation could pose.²⁴² Since then, the Competition and Markets
Authority (CMA) has opened an investigation into the relationship between Microsoft
and OpenAI, specifically whether it “… has resulted in a relevant merger situation and, if
so, the impact that the merger could have on competition in the UK”.²⁴³

157. Sarah Cardell, Chief Executive of the CMA, has observed that leading developers
hold “… strong positions in one or more critical inputs for upstream model development,
while also controlling key access points or routes to market for downstream deployment”,
a situation that has raised concerns “… that they could leverage power up and down the
value chain” to the detriment of free and open competition.²⁴⁴

158. Data is chief among these critical inputs, due to the volume required to train current
AI models. Whilst the banking duty of confidentiality is well-established in the UK,²⁴⁵
Nikhil Rathi, Chief Executive of the Financial Conduct Authority, has highlighted how
“safe data sharing can benefit firms, markets and consumers”.²⁴⁶ Air Street Capital, a UK-
based venture capital firm that invests in AI-first technology and life science companies,
has said that the UK “could consider creating a national data bank… [using] data from the
BBC, government departments, our universities, and other sources”, which could then be
made available to developers.²⁴⁷

159. At the so-called ‘frontier’ of AI, a small group of leading developers are responsible
for, and accruing significant benefits from, the development of advanced models
and tools—thanks in part to their ability to access the necessary training data. This
potential dominance is arguably to the detriment of free and open competition.

160. As the regulator responsible for promoting competitive markets and tackling
anti-competitive behaviour, the CMA should identify abuses of market power and
use its powers to stop them. This could take the form of levying fines or requiring the
restructuring of proposed mergers.

161. AI models and tools rely on access to high-quality input data. The phrase ‘garbage
in, garbage out’ is not new, but it is particularly applicable to AI. The potential for
human error and bias notwithstanding, deployers should not solely rely on outputs
produced with AI tools to determine their decision-making, particularly in areas that
could affect the rights and standing of the individuals or entities concerned, such as
insurance decisions or recruitment. These algorithmic decisions should always be
reviewed and verified by trained humans, and those affected should have the right to
challenge these decisions—a process that should also be human-centred.

242 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 57
243 CMA seeks views on Microsoft’s partnership with OpenAI, Competition and Markets Authority, 8 December
2023
244 Opening remarks at the American Bar Association (ABA) Chair’s Showcase on AI Foundation Models, GOV.UK, 11
April 2024
245 ICO statement on banks sharing and gathering personal information, Information Commissioner’s Office, 26
July 2023
246 Navigating the UK’s Digital Regulation Landscape: Where are we headed? Financial Conduct Authority, 22 April
2024
247 The UK LLM opportunity: Air Street at the House of Lords, Air Street Press, 28 September 2023

162. The Government and future administrations should support the emergence of more
AI startups in the UK by ensuring they can access the high-quality datasets they need
to innovate. This could involve facilitating access to anonymised public data from
data.gov.uk, the NHS and BBC via a National Data Bank, subject to appropriate safeguards.

5: The Access to Compute Challenge


Democratising and widening access to compute is a prerequisite for a healthy, competitive
and innovative AI industry and research ecosystem

163. As one analysis has described it, “… data is the raw material that is processed by compute;
put differently, compute is the ‘engine’ fuelled by large amounts of data”.²⁴⁸ Access to both
has become essential to developing and deploying at scale many of the AI tools available
today.²⁴⁹

164. The Government has identified access to compute as key to the further development
of AI-related research and industry in the UK, and has announced the establishment of
an AI Research Resource and a new cluster of supercomputers.²⁵⁰ The 2024 Spring Budget
said that the Government would set out “… how access to the UK’s cutting edge public
compute facilities will be managed, so that both researchers and innovative companies
are able to secure the computing power they need…”.²⁵¹ The Advanced Research and
Invention Agency has also launched a programme aimed at reducing the cost of AI
hardware, Scaling Compute.²⁵²

165. We welcome the Government’s moves to establish a dedicated AI Research
Resource and a cluster of supercomputers but are concerned that it has yet to set out
further details of how researchers and startups will be able to access the compute they
need to maximise the potential benefits of AI across society and the economy.

166. The Government, or its successor administration, should publish an action plan and
proposed deliverables for both the AI Research Resource and its cluster of supercomputers,
and further details of the terms under which researchers and innovative startups will be
able to access them. It should also undertake a feasibility study into the establishment of
a National Compute Cluster that could be made available to researchers and startups.

6: The Black Box Challenge


We should accept that the workings of some AI models are and will remain unexplainable
and focus instead on interrogating and verifying their outputs

248 Computing Power and the Governance of Artificial Intelligence, Girish Sastry, Lennart Heim, Haydn Belfield,
Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin
Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua
Bengio, Diane Coyle, 14 February 2024, p. 7
249 Computing Power and the Governance of Artificial Intelligence, Girish Sastry, Lennart Heim, Haydn Belfield,
Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin
Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua
Bengio, Diane Coyle, 14 February 2024, p. 10
250 Technology Secretary announces investment boost making British AI supercomputing 30 times more powerful,
GOV.UK, 1 November 2023
251 Spring Budget 2024, HC 560, HM Treasury, 6 March 2024, p. 61
252 Scaling Compute, Advanced Research and Invention Agency, accessed 23 May 2024

167. Our interim Report described how new AI models and tools “… have increasingly
become ‘black boxes’, that is, their decision-making processes are not explainable”.²⁵³
“Appropriate transparency and explainability” was one of the five high-level principles set
out in the March 2023 AI White Paper.²⁵⁴

168. The emergence of large language models (LLMs) in particular has encapsulated the
Black Box Challenge. A paper prepared by an expert group to inform discussions at
the AI Seoul Summit in May 2024 described how even “… researchers currently cannot
generate human-understandable accounts of how general-purpose AI models and systems
arrive at outputs and decisions”.²⁵⁵

169. In a Report examining LLMs, the House of Lords Communications and Digital
Committee described them as “… very complex and poorly understood; [they] operate
black-box decision-making; datasets are so large that meaningful transparency is difficult
…”.²⁵⁶

170. The Black Box Challenge is one of the most paradigm-shifting consequences of
AI, as it upends our well-established reliance on explainability and understanding.
Given the complexity of currently available and, in all likelihood, future models, the
starting point should be an acknowledgement of how little we can understand about
how many AI models produce their outputs, an acceptance that new ways of thinking
will be required, and a regulatory approach that accounts for the impossibility of total
explainability.

171. The regulators charged with implementing the Government’s high-level AI
governance principles should, in their approach to these models and tools, prioritise
testing and verifying their outputs, as well as seeking to establish—whilst accepting the
difficulty of doing so with absolute certainty—how they arrived at them.

7: The Open-Source Challenge


The question should not be ‘open’ or ‘closed’, but rather whether there is a sufficiently
diverse and competitive market to support the growing demand for AI models and tools

172. Our interim Report found differing views as to whether the code, training data and
weights of AI models should be freely available.²⁵⁷ We referred to this as the Open-Source
Challenge.

173. Since the publication of our interim Report, these debates have intensified. Leading
developers such as Google and OpenAI have made their most advanced AI models

253 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 61
254 A pro-innovation approach to AI regulation, CP 815, Department for Science, Innovation and Technology, 29
March 2023, p. 6
255 International Scientific Report on the Safety of Advanced AI: Interim Report, Department of Science, Innovation
and Technology, 17 May 2024, p. 83
256 House of Lords Communications and Digital Committee, First Report of Session 2023–24, Large Language
Models and Generative AI, HL Paper 54, para 159
257 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 65–67

proprietary,²⁵⁸ whilst Meta has differentiated itself by making its Llama 2 and Llama 3
models more openly available—although, as TechCrunch has noted, in a move designed
to safeguard against competitors Meta does not permit other developers to use Llama to
train generative AI models, whilst app developers with over 700 million monthly users are
required to seek a commercial licence, which Meta could refuse.²⁵⁹

174. In its Report examining LLMs, the House of Lords Communications and Digital
Committee said that “the UK has particular strengths in mid-tier businesses and will benefit
most from a combination of open and closed source technologies”.²⁶⁰ The importance of
having a range of ‘open’ and ‘closed’ models available has also been underlined by the
Competition and Markets Authority.²⁶¹

175. The open-source approach has underpinned many technological breakthroughs,
including the Internet and AI. Whilst some providers of products and services, such
as AI models and their applications, will want to keep elements of their offerings
proprietary, a healthy AI marketplace should be sufficiently diverse to support both
‘open’ and ‘closed’ options. The volume of investment flowing into developers of all
types of AI models, rather than one type or the other, is evidence of this market diversity.

176. When procuring AI models for deployment in the public sector, the Government
and public bodies should utilise those best suited to the task.

177. The Secretary of State for Science, Innovation and Technology told us that the debate
associated with the Open-Source Challenge should be more nuanced:

… our concerns should be around the capability. Sometimes, depending on
the capability, you will be less concerned if it is open source. If the capability
presents more risk, you will be more concerned.²⁶²

178. These risks include those detailed in a submission to our inquiry by the Internet Watch
Foundation (IWF), which highlighted the use of AI tools—mostly but not exclusively
open-source—to generate child sexual abuse imagery. The IWF recommended “… greater
regulatory oversight of open-source models and the data sets they are built on before they
are released”, and the insertion of safeguards by developers to prevent such content being
generated with their proprietary models.²⁶³

179. The Government has said that “pre-deployment testing could inform the deployment
options available for a model and change the risk prevention steps required of organisations
prior to the model’s release”. It has also committed to engaging closely with the open-
source community and experts on the question of open release.²⁶⁴

258 OpenAI co-founder on company’s past approach to openly sharing research: ‘We were wrong’, The Verge, 15
March 2023; Google Gemma: because Google doesn’t want to give away Gemini yet, The Verge, 21 February
2024
259 Meta releases Llama 3, claims it’s among the best open models available, TechCrunch, 18 April 2024
260 House of Lords Communications and Digital Committee, First Report of Session 2023–24, Large Language
Models and Generative AI, HL Paper 54, para 40
261 AI Foundation Models update paper, Competition and Markets Authority, 11 April 2024, p. 23
262 Q806
263 Internet Watch Foundation (GAI0130)
264 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, pp. 33–34

180. The Government should in its response to this Report tell us how it will ensure law
enforcement and regulators are adequately resourced to respond to the growing use of
AI models and tools to generate and disseminate harmful and illegal content.

8: The Intellectual Property and Copyright Challenge


The Government should broker a fair, sustainable solution based around a licensing
framework governing the use of copyrighted material to train AI models

181. Our interim Report detailed concerns about “the ‘scraping’ of copyrighted
content from online sources without permission” in order to train AI models,²⁶⁵ and
representatives from the creative industries told us that they hoped to reach a mutually
beneficial solution with the AI sector, potentially in the form of a licensing framework for
the use of copyrighted content to train models and tools.²⁶⁶

182. In the summer of 2023 the Intellectual Property Office (IPO), an executive agency
of the Government, convened a working group comprising representatives from the
technology, creative and research sectors, with a view to agreeing a voluntary code of
practice on copyright and AI.²⁶⁷

183. In subsequent months the number of relevant legal proceedings in various jurisdictions
has continued to increase, with the ongoing case filed by the New York Times against
OpenAI and Microsoft over their alleged use of copyrighted work among the most high-
profile examples.²⁶⁸

184. In its response to the AI White Paper consultation, the Government confirmed that
the working group convened by the IPO had been unable to agree a code of practice, and
that Ministers from DSIT and the Department for Culture, Media and Sport would bring
forward proposals “… to ensure the workability and effectiveness of an approach that
allows the AI and creative sectors to grow together in partnership”.²⁶⁹ It has subsequently
said that it “… will focus on greater transparency from AI developers and ensure that AI
outputs are properly attributed”, in co-operation with international counterparts.²⁷⁰

185. The growing volume of litigation relating to alleged use of works protected by
copyright to train AI models and tools, and the value of high-quality data needed
to train future models, has underlined the need for a sustainable framework that
acknowledges the inevitable trade-offs and establishes clear, enforceable rules of the
road. The status quo allows developers to potentially benefit from the unlimited, free
use of copyrighted material, whilst negotiations are stalled.

186. The current Government, or its successor administration, should ensure that
discussions regarding the use of copyrighted works to train and run AI models are

265 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 68
266 Q350
267 The government’s code of practice on copyright and AI, GOV.UK, 29 June 2023
268 The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work, The New York Times, 27 December
2023
269 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 19
270 Department for Culture, Media and Sport, Council for Science and Technology Report: Harnessing Research and
Development (R&D) in the UK Creative Industries, April 2024, p. 4

concluded and an implementable approach agreed. It seems inevitable that this will
involve the agreement of a financial settlement for past infringements by AI developers,
the negotiation of a licensing framework to govern future uses, and in all likelihood
the establishment of a new authority to operationalise the agreement. If this cannot be
achieved through a voluntary approach, it should be enforced by the Government, or its
successor administration, in co-operation with its international partners.

9: The Liability Challenge


Determining liability for AI-related harms is not just a matter for the courts—
Government and regulators can play a role too

187. Our interim Report discussed the “increasingly complex and international supply
chains for AI models and tools”, and the Challenge this created in terms of the distribution
of liability for harms.²⁷¹ In its response to the AI White Paper consultation, the Government
identified “… how to allocate liability across the supply chain” as one of the key questions
that policymakers would have to answer through their regulatory approaches to AI.²⁷²

188. The Government has acknowledged that many respondents to its AI White Paper
consultation “… endorsed further government intervention to ensure the fair and effective
allocation of liability”.²⁷³ It has said that it will seek expert input on whether updates to
civil or criminal liability frameworks are required to account for the continued emergence
of advanced AI models and tools.²⁷⁴ This is expected to inform the decision on whether to
introduce AI-specific legislation discussed in Chapter 3.

189. Nobody who uses AI to inflict harm should be exempted from the consequences,
whether they are a developer, deployer, or intermediary. The next Government,
together with sectoral regulators, should publish guidance on where liability for harmful uses of
AI falls under existing law. This should be a cross-Government undertaking. Sectoral
regulators should ensure that guidance on liability for AI-related harms is made
available to developers and deployers as and when it is required. Future administrations
and regulators should also, where appropriate, establish liability via statute rather than
simply relying on jurisprudence.

10: The Employment Challenge


Education is the primary tool for policymakers to respond to the growing prevalence of
AI, and to ensure workers can ask the right questions of the technology

190. Our interim Report detailed different perspectives on the impact of AI on
employment and found that, whatever the outcome, planning ahead should be the key
task for policymakers.²⁷⁵
271 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 72–74
272 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, pp. 25–26
273 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 57
274 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 34
275 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 75–77

191. Economists Daron Acemoglu and Simon Johnson have argued that policymakers
“… must recognise that there is no singular, inevitable path of development for new
technology”. The key question, in their view, should be “… what policies would put AI
development on the right path, with greater focus on enhancing what all workers can
do?”.²⁷⁶

192. The Government has acknowledged the importance of preparing UK workers for
an AI-enabled economy, from both a skills and employment rights perspective. It has
argued that whilst the impact of AI will be felt differently in different sectors, “… we can
be confident that we will need new AI-related skills through national qualifications and
training provision”.²⁷⁷

193. AI is already changing the nature of work, and as the technology evolves this
process is likely to accelerate, placing some jobs at risk. At the same time, there are
productivity benefits to be won, provided people are equipped with the skills to
fruitfully utilise AI. This is a process that should begin in the classroom, and through
greater prioritisation of initiatives such as the Lifetime Skills Guarantee and digital
Skills Bootcamps.

194. The current Government, or its successor, should commission a review into the
possible future skills and employment consequences of AI, along the lines of the 2017
Taylor Review of modern working practices which examined the landscape, suggested
ideas for debate and has resulted in legislative change. It should also in its response to
this Report tell us how it will ensure workers whose jobs are at risk of automation will be
able to retrain and acquire the skills necessary to change careers.

11: The International Coordination Challenge


A global governance regime for AI may be neither realistic nor desirable, even if there are
economic and security benefits to be won from international co-operation

195. The global nature of AI governance debates was highlighted in our interim Report,²⁷⁸
and have since been underlined by the proliferation of international-level initiatives to
establish a consensus around governance frameworks for the development and deployment
of the technology.

196. Notable among these was the inaugural AI Safety Summit organised at Bletchley
Park and discussed in Chapter 6 of this Report. The UK has also participated in ongoing
initiatives by the Council of Europe, G7, G20, Organisation for Economic Co-operation
and Development and United Nations; and signed a number of bilateral partnerships
involving joint work on AI and AI governance.²⁷⁹

276 Daron Acemoglu, Simon Johnson: Rebalancing AI, International Monetary Fund, 30 November 2023
277 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 18
278 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 78–81
279 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, pp. 37–39; G7 nations to harness AI and innovation to
drive growth and productivity, GOV.UK, 15 March 2024; UK & United States announce partnership on science of
AI safety, GOV.UK, 2 April 2024

197. At the AI Seoul Summit in 2024, a group of jurisdictions, including the UK, United
States and European Union, announced that they would collaborate more closely on matters
relating to AI safety, innovation and inclusivity. The Seoul Declaration recognised “… the
importance of interoperability between AI governance frameworks in line with a risk-
based approach to maximize the benefits and address the broad range of risks from AI”.²⁸⁰
A Ministerial communique published at the conclusion of the Summit confirmed that
27 nations had agreed to “develop shared risk thresholds for frontier AI development
and deployment… as part of a wider push to develop global standards to address
specific AI risks”.²⁸¹

198. The current Government has said that it sees value in pursuing “… coherence between
our AI governance frameworks to ensure that businesses can operate effectively in both
the UK and wider global markets”.²⁸²

199. However, since our interim Report, the race between different jurisdictions to secure
competitive advantage, often underpinned by geopolitical calculations, has become
increasingly visible. It is notable, for example, that Peng Xiao, the Chief Executive of G42,
a leading developer based in the United Arab Emirates, told the Financial Times in
December 2023 that it would no longer use hardware suppliers from China, in order to
maintain and deepen its links with United States-based partners including Microsoft and
OpenAI.²⁸³

200. A subsequent $1.5 billion investment by Microsoft in G42 was announced in April
2024, and described by the Financial Times as “… part of Washington’s efforts to achieve
supremacy over Beijing in the development of artificial intelligence and other sensitive
technologies”.²⁸⁴ G42 was among the developers to sign on to the voluntary commitments
announced at the May 2024 AI Seoul Summit.²⁸⁵

201. In a national security context, the North Atlantic Treaty Organisation (NATO)
adopted an AI Strategy in October 2021.²⁸⁶ It sets out six principles for the responsible use
of AI and has been operationalised by the piloting of AI in NATO areas of operation “…
as diverse as cyber defence, climate change and imagery analysis”.²⁸⁷

202. We welcome the organisation of the AI Safety Summit at Bletchley Park and
commend the Government on bringing many key actors together. We look forward
to subsequent Summits and hope that the consensus and momentum delivered at
Bletchley Park can be maintained.

203. However, looking beyond the AI safety discussion, we do not believe that
harmonisation for harmonisation’s sake should be the end goal of international AI

280 Seoul Declaration for safe, innovative and inclusive AI by participants attending the Leaders’ Session: AI
Seoul Summit, 21 May 2024, Department for Science, Innovation and Technology, 21 May 2024. The signatory
jurisdictions were Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea,
the Republic of Singapore, the United Kingdom, and the United States
281 New commitment to deepen work on severe AI risks concludes AI Seoul Summit, GOV.UK, 22 May 2024
282 A pro-innovation approach to AI regulation: Government response to consultation, CP 1019, Department for
Science, Innovation and Technology, 6 February 2024, p. 39
283 UAE’s top AI group vows to phase out Chinese hardware to appease US, Financial Times, 7 December 2023
284 US seeks alliance with Abu Dhabi on artificial intelligence, Financial Times, 20 April 2024
285 Frontier AI Safety Commitments: AI Seoul Summit 2024, Department for Science, Innovation and Technology, 21
May 2024
286 An Artificial Intelligence Strategy for NATO, NATO Review, 25 October 2021
287 NATO starts work on Artificial Intelligence certification standard, NATO, 7 February 2023

governance discussions. A degree of distinction between different regulatory regimes
is, in our view, inevitable. Such distinction may be motivated by geopolitics, but it may
also simply be a case of healthy economic competition.

204. Future AI Safety Summits must focus on the establishment of international dialogue
mechanisms to address current, medium- and longer-term safety risks presented by the
growing use of AI; and the sharing of best practice to ensure its potential benefits are
realised in all jurisdictions. This should not set us on the road to a global AI governance
regime—we are unconvinced that such a prospect is either realistic or desirable.

12: The Existential Challenge


Existential AI risk may not be an immediate concern but it should not be ignored, even
if policy and regulatory activity should primarily focus on the here and now

205. Our interim Report highlighted debates over the security implications of AI’s
increasing prevalence, and over the existential risks that it may or may not pose.²⁸⁸ These
were described by the Government in the March 2023 AI White Paper as “… high impact
but low probability”.²⁸⁹

206. Since the publication of the AI White Paper and our interim Report, the Government
has highlighted the most serious potential risks associated with the advancing capability
of AI, through the establishment of the AI Safety Institute and the organisation of the
AI Safety Summit, discussed in Chapter 6 of this Report. The AI Safety Institute has
since said that its testing work will include examination of the potential for autonomous
systems, which it has defined as those “… that are deployed to act semi-autonomously
in the real world. This includes the ability for these systems to autonomously replicate,
deceive humans and create more powerful AI models”.²⁹⁰

207. A discussion paper written to inform discussions at the inaugural AI Safety Summit
and published by the Government described ‘loss of control’ risks posed by advanced AI,
and categorised these as:

• humans increasingly hand over control of important decisions to AIs. It becomes
increasingly difficult for humans to take back control; and

• AI systems actively seek to increase their own influence and reduce human
control.²⁹¹

208. A subsequent paper prepared ahead of the AI Seoul Summit in May 2024 concluded
that:

288 Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, The Governance of Artificial
Intelligence: interim report, HC 1769, para 82
289 A pro-innovation approach to AI regulation, CP 815, Department for Science, Innovation and Technology, 29
March 2023, p. 50
290 AI Safety Institute: third progress report, Department for Science, Innovation and Technology, 5 February 2024
291 Capabilities and risks from frontier AI: A discussion paper on the need for further research into AI risk,
Department for Science, Innovation and Technology, 25 October 2023, p. 26

The worst outcomes could see the emergence of risks like large-scale
unemployment, general-purpose AI-enabled terrorism, or even humanity
losing control over general-purpose AI systems. There is no consensus
among experts about how likely these risks are and when they might occur.²⁹²

Professor Stuart Russell, a professor of computer science at the University of California,
Berkeley, who contributed to the paper, has spoken of the need for Governments to
agree “… a treaty that compels developers to write an off-switch into future and existing
software”,²⁹³ in light of findings that in some cases “… AI agents could have a tendency to
‘seek power’ by accumulating resources, interfering with oversight processes, and avoiding
being deactivated, because these actions help them achieve their given goals”.²⁹⁴

209. The debate over the existential risk—or lack of it—posed by the increasing
prevalence of AI has attracted significant attention. However, the Government’s initial
assessment, that such existential risks are high impact but low probability, appears to
be accurate. Nevertheless, given the potential consequences should risks highlighted
by the AI Safety Institute and other researchers be realised, it is right for Governments
to continue to engage with experts on the issue.

210. When implementing the principles set out in the AI White Paper, regulatory activity
should be focused on here-and-now impacts. Assessing and responding to existential
risk should primarily be the responsibility of the UK’s national security apparatus,
supported by the AI Safety Institute.

211. Should the acuteness of existential AI risk be judged to have increased, discussions
regarding the implications and possible response should take place in international
fora, such as AI Safety Summits.

292 International Scientific Report on the Safety of Advanced AI: Interim Report, Department for Science,
Innovation and Technology, 17 May 2024, p. 83
293 Apocalypse a real risk unless smart tech has ‘kill switch’, warns AI godfather, The Times, 24 September 2023
294 International Scientific Report on the Safety of Advanced AI: Interim Report, Department for Science,
Innovation and Technology, 17 May 2024, p. 52

Conclusions and recommendations


Introduction

1. With a General Election approaching, we have sought to make this Report
future-proof and believe that our conclusions and recommendations will remain
applicable to future Administrations. It is important that the timing of the General
Election does not stall necessary efforts by the Government, developers and deployers
of AI to increase the level of public trust in a technology that has become a central part
of our everyday lives. (Paragraph 8)

The case for AI

2. If governed appropriately, we believe that AI can deliver on its significant promise
to complement and augment human activity. The Government has articulated the
case for AI: better public services, high quality jobs and a new era of economic
growth driven by advances in AI capabilities. (Paragraph 22)

3. The Government is right to emphasise the potential societal and economic benefits
to be won from the strategic deployment of AI. However, as our interim Report
highlighted, the challenges are as clear as the potential benefits, and these benefits
cannot be realised without public trust in the technology. (Paragraph 23)

4. The Government should certainly make the case for AI but should equally ensure that
its regulatory framework addresses the Twelve Challenges of AI Governance that we
have identified in our interim Report and offer potential solutions to in this Report.
(Paragraph 24)

AI-specific legislation

5. The next Government should stand ready to introduce new AI-specific legislation,
should an approach based on regulatory activity, existing legislation and voluntary
commitments by leading developers prove insufficient to address current and potential
future harms associated with the technology. (Paragraph 33)

6. The Government should in its response to this Report provide further consideration of
the criteria on which a decision to legislate will be triggered, including which model
performance indicators, training requirements such as compute power or other factors
will be considered. (Paragraph 34)

7. The next Government should commit to laying before Parliament quarterly reviews
of the efficacy of its current approach to AI regulation, including a summary of
technological developments related to its stated criteria for triggering a decision to
legislate, and an assessment of whether these criteria have been met. (Paragraph 35)

The role of regulators

8. We welcome confirmation that the Government will undertake a regulatory gap
analysis to determine whether regulators require new powers to respond properly to
the growing use of AI, as recommended in our interim Report. However, as the end
of this Parliament approaches, there is no longer time to bring forward any updates
to current regulatory remits and powers, should they be discovered to be necessary.
This could constrain the ability of regulators to properly implement the Government’s
AI principles and undermine the UK’s overall approach. (Paragraph 40)

9. The next Government should conduct and publish the results of its regulatory gap
analysis as soon as is practicable. If the analysis identifies any legislation required to
close regulatory gaps, this should be brought forward in time for it to be enacted as
soon as possible after the General Election. (Paragraph 41)

10. The general-purpose nature of AI will, in some instances, lead to regulatory overlap,
and a potential blurring of responsibilities. This could create confusion on the part
of consumers, developers and deployers of the technology, as well as regulators
themselves. (Paragraph 45)

11. The steering committee that the Government has said it will establish should be
empowered to provide guidance and, where necessary, direction to help regulators
navigate any overlapping remits, whilst respecting the independence of the UK’s
regulators. (Paragraph 45)

12. The regulatory gap analysis being undertaken by the Government should identify, in
consultation with the relevant regulators and co-ordinating entities such as the Digital
Regulation Cooperation Forum and the AI and Digital Regulations Service, areas
where new AI models and tools will necessitate closer regulatory co-operation, given
the extent to which some uses for AI, and some of the challenges these can present—
such as accelerating existing biases—are covered by more than one regulator. The
gap analysis should also put forward suggestions for delivering this co-ordination,
including joint investigations, a streamlined process for regulatory referrals, and
enhanced levels of information sharing. (Paragraph 46)

13. The increasing prevalence and general-purpose nature of AI will create challenges
for the UK’s sectoral regulators, however expert they may be. The AI challenge
can be summed up in a single word: capacity. Ofcom, for example, is combining
implementation of a broad new suite of powers conferred on it by the Online Safety
Act 2023, with formulating a comprehensive response to AI’s deployment across its
wider remit. Others will be required to undertake resource-intensive investigations
and it is vital that they are able, and empowered, to do so. All will be required to pay
greater attention to the outputs of AI tools in their sectors, whilst paying due regard
to existing innovation and growth-related objectives. (Paragraph 55)

14. The announced £10 million to support regulators in responding to the growing
prevalence of AI is clearly insufficient to meet the challenge, particularly when
compared to the UK revenues of leading AI developers. (Paragraph 56)

15. The next Government must announce further financial support, agreed in consultation
with regulators, that is commensurate to the scale of the task. It should also consider
the benefits of a one-off or recurring industry levy that would allow regulators to
supplement or replace support from the Exchequer for their AI-related activities.
(Paragraph 57)

AI in the public and private sectors

16. AI can be used to increase productivity and augment the contributions of human
workers in both the public and private sectors. We welcome the establishment
of i.AI and the focus on AI deployment set out in the public sector productivity
programme; as well as initiatives to increase business adoption such as the AI and
Digital Hub. (Paragraph 73)

17. The next Government should drive safe adoption of AI in the public sector via i.AI,
the National Science and Technology Council and designated lead departmental
Ministers for AI. (Paragraph 74)

18. In its response to this Report, the Government should confirm the full list of public
sector pilots currently being led or supported by i.AI, the criteria that determined i.AI
pilot project selections, how it intends to evaluate their success and decide whether
to roll them out more widely, and what other pilots are planned for the remainder of
2024. (Paragraph 75)

19. i.AI should undertake an assessment of the existing civil service workforce’s AI
capability, identify areas of the public sector that would benefit the most from the
use of AI and where value for money can be delivered, set out how potential risks
associated with its use should be mitigated, and publish a detailed AI public sector
action plan. Progress against these should be reported to Parliament on an annual
basis and through regular written or oral statements by Ministers. (Paragraph 76)

20. The requirement for Government departments to use the Algorithmic Transparency
Recording Standard should be extended to all public bodies sponsored by Government
departments, from 1 January 2025. (Paragraph 77)

The AI Safety Institute

21. It is a credit to the commitment of those involved that the AI Safety Institute has
been swiftly established, with an impressive and growing team of researchers and
technical experts recruited from leading developers and academic institutions.
(Paragraph 80)

22. The next Government should continue to empower the Institute to recruit the talent
it needs. (Paragraph 80)

23. Although the Institute is not a regulator, it has undeniably played a decisive role
in shaping the UK’s regulatory approach to AI. We commend the work of the
Institute and its researchers in facilitating and informing the ongoing international
conversation about AI governance. (Paragraph 89)

24. However, we are concerned by suggestions that the Institute has been unable to
access as-yet unreleased AI models to perform the pre-deployment safety testing it
was set up to undertake. If true, this would undermine the delivery of the Institute’s
mission and its ability to increase public trust in the technology. (Paragraph 90)

25. In its response to this Report, the Government should confirm which models the AI
Safety Institute has undertaken pre-deployment safety testing on, the nature of the
testing, a summary of the findings, whether any changes were made by the model’s
developers as a result, and whether any developers were asked to make changes but
declined to do so. (Paragraph 91)

26. The Government should also confirm which models the Institute has been unable
to secure access to, and the reason for this. If any developers have refused access—
which would represent a contravention of the reported agreement at the November
2023 Summit at Bletchley Park—the Government should name them and detail their
justification for doing so. (Paragraph 92)

The international dimension

27. In our interim Report we highlighted moves by both the United States and European
Union to develop their own approaches to AI governance. The subsequent White
House Executive Order and the EU AI Act are clear attempts to secure competitive
regulatory advantage. (Paragraph 129)

28. It is true that the size of both the United States and European Union markets
may mean that ‘the Washington effect’ and ‘Brussels effect’—referring to the de
facto standardising of global regulatory approaches, potentially to the detriment
of the UK’s distinct approach—will apply to AI governance. Nevertheless, the
distinctiveness of the UK’s approach and the success of the AI Safety Summit have
underlined the significance of its current and future role. (Paragraph 130)

29. Both the US and EU approaches to AI governance have their downsides. The former
imposes requirements only on Federal bodies and relies on voluntary commitments
from leading developers. The latter has been criticised for its top-
down, prescriptive approach and the potential for uneven implementation across
different member states. (Paragraph 131)

30. The UK is entitled to pursue an approach that considers developments in other
jurisdictions but does not unthinkingly replicate them. However, where there are
lessons to be learned from other jurisdictions, the next Government should be willing
to apply them. (Paragraph 132)

31. The UK has a long history of encouraging technological innovation by offering a
stable, expert regulatory environment coupled with clear industry standards. The
current Government is therefore right to have encouraged the growth of a strong
AI sector in the UK, engaged with leading developers through the AI Safety
Institute and future Summits, and participated in international standards fora. This
international agenda should be continued by the next Government, and coupled with
the swift establishment of a domestic framework that sufficiently addresses the Twelve
Challenges of AI Governance highlighted in our interim Report. (Paragraph 133)

Twelve Challenges of AI Governance revisited

32. AI can entrench and accelerate existing biases. The current Government, future
administrations and sectoral regulators should require deployers of AI models and
tools to submit them to robust, independent testing and performance analysis prior to
deployment. (Paragraph 140)

33. Model developers and deployers should be required to summarise what steps they have
taken to account for bias in datasets used to train models, and to statistically report
on the levels of bias present in outputs produced using AI tools. This data should be
routinely disclosed in a similar way to company pay gap reporting. (Paragraph 141)

34. Regulators and deployers should ensure that the right balance is maintained between
the protection of privacy and pursuing the potential benefits of AI. Determining
this balance will depend on the context in which the technology is being deployed,
with reference to the relevant laws and regulations. (Paragraph 145)

35. Sectoral regulators should publish detailed guidance to help deployers of AI strike
the balance between the protection of privacy and securing the technology’s intended
benefits. In instances where regulators determine that this balance has not been struck,
or where the relevant laws or regulatory requirements have not been met, they should
impose sanctions or prohibit the use of AI models or tools. (Paragraph 146)

36. We welcome the Government amendment to the Criminal Justice Bill as a necessary
step towards ensuring the UK’s legal framework reflects the current state of
technological development and protects citizens, primarily women and girls, from
the consequences of AI-assisted misrepresentation, including deepfake pornography.
(Paragraph 150)

37. Should the Bill’s remaining stages fail to be completed prior to the dissolution of
Parliament, the next Government must introduce similar provisions as soon as is
practicable after the General Election. (Paragraph 150)

38. The Government and regulatory authorities, informed by the work of the Defending
Democracy Taskforce, should safeguard the integrity of the upcoming General Election
campaign in their approach to the online platforms that host deepfake content which
seeks to exert a malign influence on the democratic process. If these platforms are
found to have been slow to remove such content, or to have facilitated its spread,
regulators must take stringent enforcement action—including holding senior
leadership personally liable and imposing financial sanctions. (Paragraph 154)

39. A cross-Government public awareness campaign should be launched to inform the
public about the growing prevalence of AI-assisted misrepresentation, the potential
consequences, what the Government is doing to address the Challenge, and what steps
individuals can take to protect themselves online. (Paragraph 155)

40. At the so-called ‘frontier’ of AI, a small group of leading developers are responsible
for, and accruing significant benefits from, the development of advanced models
and tools—thanks in part to their ability to access the necessary training data. This
potential dominance is arguably to the detriment of free and open competition.
(Paragraph 159)

41. As the regulator responsible for promoting competitive markets and tackling anti-
competitive behaviour, the CMA should identify abuses of market power and use
its powers to stop them. This could take the form of levying fines or requiring the
restructuring of proposed mergers. (Paragraph 160)

42. AI models and tools rely on access to high-quality input data. The phrase ‘garbage
in, garbage out’ is not new, but it is particularly applicable to AI. (Paragraph 161)

43. The potential for human error and bias notwithstanding, deployers should not
solely rely on outputs produced with AI tools to determine their decision-making,
particularly in areas that could affect the rights and standing of the individuals or
entities concerned, such as insurance decisions or recruitment. These algorithmic
decisions should always be reviewed and verified by trained humans, and those
affected should have the right to challenge these decisions—a process that should also
be human-centred. (Paragraph 161)

44. The Government and future administrations should support the emergence of more
AI startups in the UK by ensuring they can access the high-quality datasets they need
to innovate. This could involve facilitating access to anonymised public data from
data.gov.uk, the NHS and BBC via a National Data Bank, subject to appropriate
safeguards. (Paragraph 162)

45. We welcome the Government’s moves to establish a dedicated AI Research Resource
and a cluster of supercomputers but are concerned that it has yet to set out further
details of how researchers and startups will be able to access the compute they
need to maximise the potential benefits of AI across society and the economy.
(Paragraph 165)

46. The Government, or its successor administration, should publish an action plan
and proposed deliverables for both the AI Research Resource and its cluster of
supercomputers, and further details of the terms under which researchers and
innovative startups will be able to access them. It should also undertake a feasibility
study into the establishment of a National Compute Cluster that could be made
available to researchers and startups. (Paragraph 166)

47. The Black Box Challenge is one of the most paradigm-shifting consequences of AI, as
it upends our well-established reliance on explainability and understanding. Given
the complexity of currently available and, in all likelihood, future models, the starting
point should be an acknowledgement of how little we can understand about how many
AI models produce their outputs, an acceptance that new ways of thinking will
be required, and a regulatory approach that accounts for the impossibility of total
explainability. (Paragraph 170)

48. The regulators charged with implementing the Government’s high-level AI governance
principles should, in their approach to these models and tools, prioritise testing and
verifying their outputs, as well as seeking to establish—whilst accepting the difficulty of
doing so with absolute certainty—how they arrived at them. (Paragraph 171)

49. The open-source approach has underpinned many technological breakthroughs,
including the Internet and AI. Whilst some providers of products and services, such
as AI models and their applications, will want to keep elements of their offerings
proprietary, a healthy AI marketplace should be sufficiently diverse to support both
‘open’ and ‘closed’ options. The volume of investment flowing into developers of all
types of AI models, rather than one type or the other, is evidence of this market diversity.
(Paragraph 175)

50. When procuring AI models for deployment in the public sector, the Government and
public bodies should utilise those best suited to the task. (Paragraph 176)

51. The Government should in its response to this Report tell us how it will ensure law
enforcement and regulators are adequately resourced to respond to the growing use
of AI models and tools to generate and disseminate harmful and illegal content.
(Paragraph 180)

52. The growing volume of litigation relating to alleged use of works protected by
copyright to train AI models and tools, and the value of high-quality data needed
to train future models, has underlined the need for a sustainable framework that
acknowledges the inevitable trade-offs and establishes clear, enforceable rules of the
road. The status quo allows developers to potentially benefit from the unlimited,
free use of copyrighted material, whilst negotiations are stalled. (Paragraph 185)

53. The current Government, or its successor administration, should ensure that
discussions regarding the use of copyrighted works to train and run AI models are
concluded and an implementable approach agreed. It seems inevitable that this
will involve the agreement of a financial settlement for past infringements by AI
developers, the negotiation of a licensing framework to govern future uses, and in all
likelihood the establishment of a new authority to operationalise the agreement. If
this cannot be achieved through a voluntary approach, it should be enforced by the
Government, or its successor administration, in co-operation with its international
partners. (Paragraph 186)

54. Nobody who uses AI to inflict harm should be exempted from the consequences,
whether they are a developer, deployer, or intermediary. The next Government,
together with sectoral regulators, should publish guidance on where liability for
harmful uses of AI falls under existing law. This should be a cross-Government
undertaking. Sectoral regulators should ensure that guidance on liability for
AI-related harms is made available to developers and deployers as and when it is
required. Future administrations and regulators should also, where appropriate,
establish liability via statute rather than relying solely on jurisprudence.
(Paragraph 189)

55. AI is already changing the nature of work, and as the technology evolves this
process is likely to accelerate, placing some jobs at risk. At the same time, there
are productivity benefits to be won, provided people are equipped with the skills
to use AI fruitfully. This is a process that should begin in the classroom and
continue through greater prioritisation of initiatives such as the Lifetime Skills
Guarantee and digital Skills Bootcamps. (Paragraph 193)

56. The current Government, or its successor, should commission a review into the possible
future skills and employment consequences of AI, along the lines of the 2017 Taylor
Review of modern working practices, which examined the landscape, suggested ideas
for debate and resulted in legislative change. It should also in its response to this
Report tell us how it will ensure that workers whose jobs are at risk of automation are
able to retrain and acquire the skills necessary to change careers. (Paragraph 194)

57. We welcome the organisation of the AI Safety Summit at Bletchley Park and
commend the Government on bringing many key actors together. We look forward
to subsequent Summits and hope that the consensus and momentum delivered at
Bletchley Park can be maintained. (Paragraph 202)

58. However, looking beyond the AI safety discussion, we do not believe that
harmonisation for harmonisation’s sake should be the end goal of international
AI governance discussions. A degree of distinction between different regulatory
regimes is, in our view, inevitable. Such distinction may be motivated by geopolitics,
but it may also simply be a case of healthy economic competition. (Paragraph 203)

59. Future AI Safety Summits must focus on the establishment of international dialogue
mechanisms to address current, medium- and longer-term safety risks presented by
the growing use of AI; and the sharing of best practice to ensure its potential benefits
are realised in all jurisdictions. This should not set us on the road to a global AI
governance regime—we are unconvinced that such a prospect is either realistic or
desirable. (Paragraph 204)

60. The debate over the existential risk—or lack of it—posed by the increasing
prevalence of AI has attracted significant attention. However, the Government’s
initial assessment, that such existential risks are high impact but low probability,
appears to be accurate. Nevertheless, given the potential consequences should risks
highlighted by the AI Safety Institute and other researchers be realised, it is right
for Governments to continue to engage with experts on the issue. (Paragraph 209)

61. Regulators implementing the principles set out in the AI White Paper should focus
their activity on here-and-now impacts. Assessing and responding to existential
risk should primarily be the responsibility of the UK’s national security apparatus,
supported by the AI Safety Institute. (Paragraph 210)

62. Should the acuteness of existential AI risk be judged to have increased, discussions
regarding the implications and possible response should take place in international
fora, such as AI Safety Summits. (Paragraph 211)

Formal minutes
Thursday 23 May 2024
Stephen Metcalfe, in the Chair
Chris Clarkson
Dame Tracey Crouch
James Davies
Katherine Fletcher

Draft Report (Governance of artificial intelligence), proposed by the Chair, brought up and
read.

Ordered, That the draft Report be read a second time, paragraph by paragraph.

Paragraphs 1 to 211 read and agreed to.

Summary agreed to.

Resolved, That the Report be the Third Report of the Committee to the House.

Ordered, That the Chair make the Report to the House.

Ordered, That embargoed copies of the Report be made available, in accordance with the
provisions of Standing Order No. 134.

Adjournment

The Committee adjourned.



Witnesses
The following witnesses gave evidence. Transcripts can be viewed on the inquiry publications
page of the Committee’s website.

Wednesday 25 January 2023

Professor Michael Osborne, Professor of Machine Learning and co-founder,
University of Oxford and Mind Foundry; Michael Cohen, DPhil candidate in
Engineering Science, University of Oxford Q1–54

Mrs Katherine Holden, Head of Data Analytics, AI and Digital Identity, techUK;
Dr Manish Patel, CEO, Jiva.ai Q55–96

Wednesday 22 February 2023

Adrian Joseph, Chief Data and AI Officer, BT Group; Jen Gennai, Director,
Responsible Innovation, Google; Hugh Milward, General Manager, Corporate,
External and Legal Affairs, Microsoft UK Q97–143

Professor Dame Wendy Hall, Regius Professor of Computer Science, University
of Southampton; Professor Sir Nigel Shadbolt, Professorial Research Fellow in
Computer Science and Principal, Jesus College, University of Oxford Q144–173

Wednesday 08 March 2023

Professor Andrew Hopkins, Chief Executive, Exscientia Q174–222

Professor Delmiro Fernandez-Reyes, Professor of Biomedical Computing,
University College London, Adjunct Professor of Paediatrics, University of
Ibadan; Professor Mihaela van der Schaar, John Humphrey Plummer Professor
of Machine Learning, Artificial Intelligence and Medicine, and Director,
Cambridge Centre for AI in Medicine, Cambridge University Q223–262

Wednesday 29 March 2023

Professor Rose Luckin, Professor of Learner Centred Design, University College
London, Director, Educate; Daisy Christodoulou, Director of Education, No
More Marking Q263–294

Dr Matthew Glanville, Head of Assessment Principles and Practice, The
International Baccalaureate; Joel Kenyon, Science Teacher and Community
Cohesion Lead, Dormers Wells High School, Southall, London Q295–326

Wednesday 10 May 2023

Jamie Njoku-Goodwin, CEO, UK Music; Paul Fleming, General Secretary, Equity Q327–373

Coran Darling, Associate, Intellectual Property and Technology, DLA Piper; Dr
Hayleigh Bosher, Senior Lecturer in Intellectual Property Law, Brunel University Q374–411

Wednesday 24 May 2023

Lindsey Chiswick, Director of Intelligence, Metropolitan Police; Dr Tony
Mansfield, Principal Research Scientist, National Physical Laboratory Q412–506

Michael Birtwistle, Associate Director, AI and data law & policy, Ada Lovelace
Institute; Dr Marion Oswald, Senior Research Associate for Safe and Ethical
AI and Associate Professor in Law, The Alan Turing Institute and Northumbria
University Q507–538

Wednesday 25 October 2023

Dame Melanie Dawes, Chief Executive, Ofcom; Will Hayter, Senior Director,
Digital Markets Unit, Competition and Markets Authority (CMA) Q539–601

John Edwards, Information Commissioner, Information Commissioner’s Office;
Kate Jones, Chief Executive, Digital Regulation Cooperation Forum; Jessica
Rusu, Chief Data, Information and Intelligence Officer, Financial Conduct
Authority Q602–652

Wednesday 08 November 2023

Matt Clifford, Prime Minister’s representative, AI Safety Summit; Emran Mian,
Director General, Digital Technologies and Telecoms, Department for Science,
Innovation and Technology Q653–755

Wednesday 13 December 2023

Rt Hon Michelle Donelan MP, Secretary of State, Department for Science,
Innovation and Technology; Sarah Munby, Permanent Secretary, Department
for Science, Innovation and Technology Q756–822

Published written evidence


The following written evidence was received and can be viewed on the inquiry publications
page of the Committee’s website.

GAI numbers are generated by the evidence processing system and so may not be complete.
1 ACT | The App Association (GAI0018)
2 ADS (GAI0027)
3 AI & Digital Healthcare Group, Centre for Regulatory Science and Innovation,
Birmingham (University Hospitals Birmingham NHS Foundation Trust/University of
Birmingham) (GAI0055)
4 AI Centre (GAI0037)
5 AI Governance Limited (GAI0050)
6 Abrusci, Dr Elena (Lecturer, Brunel University London); and Scott, Dr Richard
Mackenzie-Gray (Postdoctoral Fellow, University of Oxford) (GAI0038)
7 Academy of Medical Sciences (GAI0072)
8 Ada Lovelace Institute (GAI0086)
9 Alfieri, Joseph (GAI0062)
10 Alliance for Intellectual Property (GAI0118)
11 Assuring Autonomy International Programme (AAIP), University of York; McDermid,
Professor John; Calinescu, Professor Radu; MacIntosh, Dr Ana; Habli, Professor
Ibrahim; and Hawkins, Dr Richard (GAI0044)
12 BCS - Chartered Institute for Information Technology (GAI0022)
13 BILETA (GAI0082)
14 BT Group (GAI0091)
15 Belfield, Mr Haydn (Academic Project Manager, University of Cambridge,
Leverhulme Centre for the Future of Intelligence & Centre for the Study of
Existential Risk); Ó hÉigeartaigh, Dr Seán (Acting Director and Principal Researcher,
University of Cambridge, Centre for the Study of Existential Risk & Leverhulme
Centre for the Future of Intelligence); Avin, Dr Shahar (Senior Research Associate,
University of Cambridge, Centre for the Study of Existential Risk); Hernández-Orallo,
Prof José (Professor, Universitat Politècnica de València); and Corsi, Giulio
(Research Associate, University of Cambridge, Leverhulme Centre for the Future of
Intelligence) (GAI0094)
16 Big Brother Watch (GAI0088)
17 Bosher, Dr Hayleigh (Reader in Intellectual Property Law and Associate Dean, Brunel
University) (GAI0128)
18 British Standards Institution (BSI) (GAI0028)
19 Burges Salmon LLP (GAI0064)
20 CBI (GAI0115)
21 CENTRIC (GAI0043)
22 Carnegie UK (GAI0041)
23 Center for AI and Digital Policy (GAI0098)
24 Chiswick, Lindsey (Director of Intelligence, Metropolitan Police) (GAI0121)

25 Clement-Jones, Lord (Digital Spokesperson for the Liberal Democrats, House of
Lords); and Darling, Coran (GAI0101)
26 Cohen, Michael (DPhil Candidate, University of Oxford); and Osborne, Professor
Michael (Professor of Machine Learning, University of Oxford) (GAI0046, GAI0116)
27 Collins, Dr Philippa (Senior Lecturer in Law, University of Bristol); and Atkinson, Dr
Joe (Lecturer in Law, University of Sheffield) (GAI0074)
28 Committee on Standards in Public Life (GAI0110)
29 Competition and Markets Authority (GAI0124)
30 Compliant & Accountable Systems Research Group, Department of Computer
Science & Technology, University of Cambridge (GAI0106)
31 Connected by Data (GAI0052)
32 Copyright Alliance (GAI0097)
33 Creative Commons (GAI0015)
34 Crockett, Professor Keeley (Professor of Computational Intelligence, Manchester
Metropolitan University) (GAI0020)
35 DeepMind (GAI0100)
36 Dennis, Bobbie (Policy and Public Affairs Officer, Internet Watch Foundation (IWF))
(GAI0130)
37 Department for Digital, Culture, Media and Sport; and Department for Business,
Energy and Industrial Strategy (GAI0107)
38 Edwards, Professor Rosalind (Professor of Sociology, University of Southampton);
Gillies, Professor Val (Professor of Social Policy and Criminology, University of
Westminster); Gorin, Dr Sarah (Assistant Professor, University of Warwick); and
Ducasse, Dr Hélène Vannier (Senior Research Fellow, University of Southampton)
(GAI0035)
39 Electoral Commission (GAI0129)
40 Employers Lawyers Association (GAI0031)
41 Equity (GAI0065)
42 Financial Conduct Authority (GAI0125)
43 Fotheringham, Kit (Postgraduate Researcher, University of Bristol) (GAI0042)
44 GSK (GAI0067)
45 Google (GAI0099)
46 Hopgood, Professor Adrian (Professor of Intelligent Systems, University of
Portsmouth) (GAI0030)
47 Imperial College London Artificial Intelligence Network (GAI0014)
48 Information Commissioner’s Office (ICO) (GAI0112)
49 Institute for the Future of Work (GAI0063)
50 Institute of Physics and Engineering in Medicine (IPEM) (GAI0051)
51 Joshi, Ruchika (Harvard Kennedy School) (GAI0133)

52 Leslie, Professor David (Director of Ethics and Responsible Innovation Research, The
Alan Turing Institute; and Professor of Ethics, Technology and Society, Queen Mary
University of London) (GAI0113)
53 Liberty (GAI0081)
54 Library and Archives Copyright Alliance (GAI0120)
55 Loughborough University (GAI0070)
56 Mason, Mr Shane (Freelance consultant, n/a) (GAI0006)
57 Microsoft (GAI0083)
58 Milestone Systems (GAI0132)
59 Minderoo Centre for Technology and Democracy, University of Cambridge (GAI0032)
60 NCC Group (GAI0040)
61 NICE; HRA; MHRA; and CQC (GAI0076)
62 National Physical Laboratory (GAI0053)
63 Ofcom (GAI0126)
64 Oswald, Dr Marion (GAI0012)
65 Oxford Internet Institute (GAI0058)
66 Oxford Internet Institute, University of Oxford; and University of Exeter (GAI0024)
67 Patelli, Dr Alina (Senior Lecturer in Computer Science, Aston University) (GAI0095)
68 Protect Pure Maths (GAI0117)
69 Public Law Project (GAI0069)
70 Publishers Association (GAI0102)
71 Pupils 2 Parliament (GAI0096)
72 Queen Mary University London (GAI0073)
73 RELX (GAI0033)
74 Reed, Professor Chris (Professor of Electronic Commerce Law, Centre for Commercial
Law Studies, Queen Mary University of London) (GAI0059)
75 Richie, Dr Cristina (Lecturer, TU Delft) (GAI0001, GAI0002)
76 Rolf, Dr Steve (Research Fellow, The Digital Futures at Work (Digit) Centre, University
of Sussex Business School) (GAI0104)
77 Rolls-Royce plc (GAI0109)
78 Sage Group (GAI0108)
79 Salesforce (GAI0105)
80 Sanchez-Graells, Professor Albert (Professor of Economic Law, University of Bristol
Law School) (GAI0004)
81 School of Informatics, University of Edinburgh (GAI0079)
82 Scott, Mr. Michael (Chair of Trustees, Home-Start Nottingham) (GAI0005)
83 Sense about Science (GAI0078)
84 TUC (GAI0060)

85 Tang, Dr Guan H (Senior Lecturer, Centre for Commercial Law Studies, Queen Mary
University of London) (GAI0077)
86 TechWorks (GAI0068)
87 Tessler, Leonardo (PhD in law Candidate, University of Montreal) (GAI0092)
88 The Alliance for Intellectual Property (GAI0103)
89 The Institution of Engineering and Technology (IET) (GAI0021)
90 The LSE Law School, London School of Economics (GAI0036)
91 The Nutrition Society (GAI0007)
92 The Royal Academy of Engineering (GAI0039)
93 The Royal College of Radiologists (RCR) (GAI0087, GAI0131)
94 Thorney Isle Research (GAI0016)
95 Tripathi, Mr Karan (Research Associate, University of Sheffield); and Tzanou, Dr
Maria (Senior Lecturer in Law, University of Sheffield) (GAI0047)
96 Trustpilot (GAI0054)
97 Trustworthy Autonomous Systems Hub; The UKRI TAS Node in Governance &
Regulation; and The UKRI TAS Node in Functionality (GAI0084)
98 UK BioIndustry Association (GAI0026)
99 UK Dementia Research Institute (GAI0111)
100 UKRI (GAI0114)
101 United Nations Association UK; Article 36; Women’s International League for Peace
and Freedom UK; and Drone Wars UK (GAI0090)
102 University of Glasgow (GAI0057)
103 University of Sheffield (GAI0017)
104 University of Surrey (GAI0075)
105 Wayve (GAI0061)
106 Which? (GAI0049)
107 Whittlestone, Dr Jess (Head of AI Policy, Centre for Long-Term Resilience); and
Moulange, Richard (PhD student, MRC Biostatistics Unit, University of Cambridge)
(GAI0071)
108 Wudel, Alexandra (Political Advisor, German Parliament); Gengler, Eva (PhD Student,
FAU Nürnberg); and Center for Feminist Artificial Intelligence (GAI0013)
109 Wysa Limited (GAI0093)
110 medConfidential (GAI0011)
111 techUK (GAI0045)
112 Zurich UK (GAI0127)

List of Reports from the Committee during the current Parliament
All publications from the Committee are available on the publications page of the
Committee’s website.

Session 2023–24

Number Title Reference
1st The antimicrobial potential of bacteriophages HC 328
2nd Insect decline and UK food security HC 326
1st Special The governance of artificial intelligence: interim report: Government response to the Committee’s Ninth report of Session 2022–23 HC 248

Session 2022–23

Number Title Reference
1st Pre-appointment hearing for the Executive Chair of Research England HC 636
2nd UK space strategy and UK satellite infrastructure HC 100
3rd My Science Inquiry HC 618
4th The role of Hydrogen in achieving Net Zero HC 99
5th Diversity and Inclusion in STEM HC 95
6th Reproducibility and Research Integrity HC 101
7th UK space strategy and UK satellite infrastructure: reviewing the licensing regime for launch HC 1717
8th Delivering nuclear power HC 626
9th The governance of artificial intelligence: interim report HC 1769

Session 2021–22

Number Title Reference
1st Direct-to-consumer genomic testing HC 94
2nd Pre-appointment hearing for the Chair of UK Research and Innovation HC 358
3rd Coronavirus: lessons learned to date HC 92

Session 2019–21

Number Title Reference
1st The UK response to covid-19: use of scientific advice HC 136

2nd 5G market diversification and wider lessons for critical and emerging technologies HC 450
3rd A new UK research funding agency HC 778
