MPs at a recent artificial intelligence governance meeting were keen to hear how Ofcom, the FCA and the ICO are preparing for UK AI legislation
The Science, Innovation and Technology Committee recently took evidence from Ofcom and other regulators looking at the governance of artificial intelligence (AI).
The committee’s governance of AI meeting preceded the government’s paper on the safety of frontier AI systems and took place just days before the AI Summit at Bletchley Park, to which experts and government representatives from around the world were invited to broaden the debate on AI regulation.
Regulator readiness
Ofcom sees part of its role as the regulator for online safety, and it supports the government’s non-statutory approach to AI regulation.

Ofcom’s Non-Statutory Approach to AI Regulation
In a recent submission to the committee, Ofcom, the UK’s communications regulator, expressed its support for a non-statutory approach to AI regulation. This approach, they believe, offers flexibility and can help prevent the risk of overlap, duplication, and conflict with existing statutory regulatory regimes.
Ofcom’s Readiness to Regulate AI
During a committee hearing, Ofcom CEO Melanie Dawes was asked about the organization’s readiness to take on the role of AI regulator. Dawes revealed that there is an ongoing work programme across the organization, which is being coordinated by Ofcom’s strategy team.
Building a Specialized AI Team
Dawes shared that Ofcom began building a specialized AI team five years ago. The team, which started with 15 experts in large language models (LLMs), now numbers about 50 AI experts among Ofcom’s 1,350 staff members, including specialists in data science and machine learning as well as in some of the newer forms of AI. Dawes also mentioned that there are “quite a lot of different streams of expertise” within the organization, including a team of 350 people focused on online safety.
The Need for New Skills and Adaptability
“We do need new skills. We’ve always needed to keep building new technology expertise,” Dawes stated. When asked if she felt Ofcom was equipped to handle the challenges of AI regulation, Dawes responded affirmatively. However, she acknowledged the significant amount of uncertainty surrounding how this technology will disrupt markets. “We are open to change and adapt because Ofcom’s underlying statute is tech-neutral and not dictated by the type of tech. We can adapt our approach as needed,” she concluded.
Concerns Over Ofcom’s Resourcing
During a recent committee meeting, one Member of Parliament (MP) voiced concerns about Ofcom’s readiness to regulate artificial intelligence (AI). The MP questioned whether Ofcom had enough personnel with the right experience and capability to handle this task.
Ofcom’s Response to Concerns
Melanie Dawes, the Chief Executive of Ofcom, responded to these concerns. She acknowledged that Ofcom has been operating under a flat cash budget cap from the Treasury for many years. She warned that this could eventually start to pose real constraints for the organization.
Dawes said, “We’ve become very good at driving efficiency, but if the government were to ask us to do more in the field of AI, we would need new resources. As far as our existing remit is concerned, our current resourcing is broadly adequate right now.”
Other Regulators’ Readiness for AI Regulations
The committee meeting also included other regulators who were questioned about their readiness for AI regulations. Information Commissioner John Edwards was among them. He emphasized the importance of communication across all parts of the AI supply chain, from developing models to training models and deploying applications, especially where personal data is involved.
John Edwards on Regulatory Challenges
Edwards expressed confidence in the existing regulatory framework’s ability to handle the challenges presented by new technologies. He said, “I do believe we’re well placed to address the regulatory challenges that are presented by the new technologies.”
He further explained that the existing regulatory framework already applies to AI and requires remediation of identified risks, highlighting the accountability and transparency principles in the current framework.
Addressing Regulatory Challenges in AI Development
Edwards stressed that there is no regulatory gap when it comes to recent developments in AI: the advances are not going unchecked or unregulated. He also emphasized the importance of AI explainability principles.
ICO’s Guidance on Generative AI and Explainability
Edwards added that the Information Commissioner’s Office (ICO) has issued guidance on generative AI and on explainability, the latter produced in collaboration with the Alan Turing Institute.
Collaboration in AI Regulation
Jessica Rusu, the Chief Data, Information and Intelligence Officer at the Financial Conduct Authority (FCA), chimed in on the discussion. “There’s a lot of collaboration happening both domestically and internationally. I’ve spent a considerable amount of time with my European counterparts,” she shared.
FCA’s Approach to AI Regulation
Rusu went on to discuss the interim report, which recommends that regulators conduct a gap analysis to identify any additional powers they would need to implement the principles outlined in the government’s paper.
She also mentioned that the FCA has looked into the assurance of cybersecurity and algorithmic trading in the financial sector. “We’re quite confident that we have the tools and the regulatory toolkit at the FCA to step into this new area, particularly the consumer duty,” Rusu stated.
FCA’s Confidence in Regulating AI
“I believe, from an FCA perspective, we are content that we have the ability to regulate both market oversight as well as the conduct of firms,” Rusu concluded.
The FCA’s Experience with Algorithms
“We’ve been on quite an adventure, delving deep into the world of algorithms,” Rusu said.
The Challenges of AI Safety
A government paper published this week, just ahead of November’s Bletchley Park AI Summit, gives a glimpse of the hurdles regulators may face when it comes to AI safety.
AI: A Global Effort with Global Challenges
The paper, titled Capabilities and risks from frontier AI, comes from the Department for Science, Innovation and Technology. It emphasizes that AI development is a global effort, but warns that the path to safe AI may be littered with obstacles, including market failure among AI developers and collective action problems among countries. Because many of the potential harms are borne by society as a whole, individual companies may lack the incentive to address all the potential harms of their systems.
The “Race to the Bottom” Scenario
The authors of the report caution that fierce competition among AI developers to bring products to market quickly could lead to a “race to the bottom” scenario, in which companies rush to develop AI systems as fast as possible while neglecting safety measures.
“In such scenarios, it could be challenging even for AI developers to commit unilaterally to stringent safety standards, lest their commitments put them at a competitive disadvantage,” the report states.
A Pro-Innovation Approach to AI Safety
The government is setting its sights on a pro-innovation approach to AI safety. Prime Minister Rishi Sunak, in his speech about AI safety, emphasized the importance of honesty when discussing the risks associated with these technologies. “Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies,” he stated.
The Committee’s Governance of AI
During the committee’s governance of AI meeting, Will Hayter, senior director of the digital markets unit at the Competition and Markets Authority (CMA), was asked about the government’s proposals for consumer protection.
Understanding the AI Market
Hayter responded by saying, “We’re still trying to understand this market as it develops. We feel very confident the bill does give the right flexibility to be able to handle the market power that emerges in digital markets, and that could include an AI-driven market.”
Proposed Legislation for AI Safety
As the proposed legislation makes its way through Parliament, Hayter said the CMA would be working with the government on what he described as an “important improvement on the consumer protection side”.
The AI Safety Summit
The AI Safety Summit is due to take place at Bletchley Park on 1-2 November 2023, bringing together governments and experts to advance the discussion and implementation of AI safety measures.
Conclusion
In conclusion, the recent meeting of the Science, Innovation and Technology Committee shed light on the readiness of UK regulators, including Ofcom, the FCA and the ICO, for AI legislation. Ofcom pointed to its support for a non-statutory approach and to the specialized AI team it has built; Information Commissioner John Edwards argued that the existing regulatory framework can handle the challenges posed by new technologies; and the FCA expressed confidence in its regulatory toolkit and the consumer duty. The government’s pro-innovation approach to AI safety and the forthcoming AI Safety Summit at Bletchley Park signal a commitment to addressing the risks of rapid AI development. As AI continues to shape markets, collaboration and adaptability will be essential to balancing innovation with consumer protection.