UK government positions itself away from EU on AI regulation while testing how light touch it can go
The UK government launched a trio of documents on AI on 18 July, all with the general purpose of fostering innovation, increasing public trust in the technology and giving clarity to business. But the detail that would deliver this must wait until at least the end of the year, when the government will publish its white paper on AI regulation, itself another pause for reflection.
The department responsible is clear that the UK approach will differ from the EU's: regulation will be spread across six bodies rather than handled by one dedicated regulator, as in the EU. It is less clear on what that regulation will actually be, for now, other than so light-touch that it may amount to no more than guidance.
In the meantime, one of the documents calls for views on regulation over the next 10 weeks.
The National Strategy (not just published)
Last September, a trio of government bodies (the Department for Digital, Culture, Media and Sport, the Department for Business, Energy and Industrial Strategy and the Office for Artificial Intelligence) released the National AI Strategy, guidance which promised useful developments such as a transparency standard for AI coding, something of a world first (subsequently published in December).
The National AI Strategy was a timetable to get everything done to establish the UK as a world-leader in AI, from standards to supply chains, from 3 months to 10 years.
First 10-month review
The newly-released National AI Strategy: AI Action Plan is not so much a plan as a review, as it compares itself to the overarching National Strategy: “the first update of AI activity from across government since the AI Strategy was published.”
It covers a list of achievements, such as the publication of a further document (the policy paper National Data Strategy Mission 1 Policy Framework: Unlocking the value of data across the economy), increased numbers of fellowships for AI researchers, and initiatives by multiple departments. It is not clear whether these projects are the result of proactive engagement via the National AI Strategy or simply a collation of those departments’ existing work.
The Action Plan (review) does not mention the impact on AI research in the UK if the country does indeed fall out of the €95.5 billion Horizon Europe science funding scheme, which has broadly similar aims for AI research.
“Over the next twelve months, government’s focus will be on building on the outline proposals set out in this month’s AI governance policy paper,” concludes the Action Plan.
“Towards the end of the year, a White Paper will set out a pro-innovation approach to govern AI, driving prosperity and building trust in its use. Alongside this, work will continue to develop AI standards for a UK context. We will seek stakeholder input and continue international influence to promote these objectives.”
Second 10-month review
Published the same day is the AI Regulation Policy Paper, with what must surely be AI-generated typography.
Subtitled “Establishing a pro-innovation approach to regulating AI,” it is again a review of achievements before moving into white paper territory. On possibilities for AI regulation, there is a list of “cross-sectoral principles.”
“Our ambition is to support responsible innovation in AI – unleashing the full potential of new technologies, while keeping people safe and secure,” writes Kwasi Kwarteng, Secretary of State for Business, in his foreword.
“This policy paper sets out how the government intends to strike this balance: by developing a pro-innovation, light-touch and coherent regulatory framework, which creates clarity for businesses and drives new investment.”
Regulation will be “context-specific” with the most relevant regulator taking charge. It will be “pro-innovation and risk-based” so that regulators focus on high-risk concerns rather than low risk. Regulation will be “coherent” which involves regulators working together via the Digital Regulation Cooperation Forum (DRCF), made up of the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), the Office for Communications (Ofcom) and the Financial Conduct Authority (FCA).
“Proportionate and adaptable” based on cross-sectoral principles: “We will ask that regulators consider lighter touch options, such as guidance or voluntary measures, in the first instance. As far as possible, we will also seek to work with existing processes rather than create new ones.”
The cross-sectoral principles are outlined too:
Ensure that AI is used safely: “Ensuring safety in AI will require new ways of thinking and new approaches, however we would expect the requirements to remain commensurate with actual risk – comparable with non-AI use cases.”
Ensure that AI is technically secure and functions as designed.
Make sure that AI is appropriately transparent and explainable: “Achieving explainability of AI systems at a technical level remains an important research and development challenge.”
Embed considerations of fairness into AI.
Define legal persons’ responsibility for AI governance.
Clarify routes to redress or contestability: “Subject to considerations of context and proportionality, the use of AI should not remove an affected individual or group’s ability to contest an outcome. We would therefore expect regulators to implement proportionate measures to ensure the contestability of the outcome of the use of AI in relevant regulated situations.”
Data Protection and Digital Information Bill
Also introduced to the House of Commons and House of Lords on 18 July was the Data Protection and Digital Information Bill to amend the 2018 Data Protection Act post-Brexit.
While not specifically about AI or its regulation, it does set out to significantly change the data protection landscape and regulators’ powers – and, even more so, the powers of the Secretary of State above them.
“We welcome the UK government’s announcement of a pro-innovation approach to regulating AI,” writes Matthew Peake, Global Director of Public Policy at Onfido, in an email to Biometric Update. “However, the devil will be in the detail. We need to learn lessons from the EU, where the current approach might subject low-risk uses of AI, such as those designed to combat fraud, to high-risk rules, chilling innovation and reducing consumer protection.”
That would seem to align with the plans so far, as set out in the policy paper.
“Devolving accountability to specific industry regulators and statutory bodies can ensure regulation is governed by experts. But this must not be at the expense of consistency across various sectors where AI plays an important role.”
Onfido appears ready for the consultation.
“To that end, it is vital that as we map out the full extent of AI regulation and rules in the UK, the views of industry are continually sought, and the approach is fully collaborative across companies both big and small.”
The recently completed consultation by the same department on the Data Protection and Digital Information Bill was dubbed “rigged” and questioned over its legality.