A new horizon for AI regulation
The buzzword of 2023 was undoubtedly AI, which became unavoidable in the tech world, particularly following the launch of the generative AI chatbot ChatGPT in late 2022. Despite the widespread opportunities and uses for AI across all industries, it is not surprising that legislators, both in the UK and further afield, are looking to regulate AI to combat the risks it poses. Regulation of AI is at an early stage here in the UK, but with the dawn of the EU Artificial Intelligence Act, it will be interesting to see if the UK follows suit, particularly following a recently introduced Private Members’ Bill which may provide the necessary foundations for UK AI regulation.
Artificial Intelligence (Regulation) Bill (the Bill)
The Bill was introduced to the UK Parliament by Lord Holmes of Richmond on 22 November 2023. As the Bill is a Private Members’ Bill (PMB), it is not part of the Government’s planned legislation, and PMBs rarely succeed. However, given the rapid increase in the deployment of AI and the potential risks it poses, regulation of AI will inevitably appear on the Government’s agenda soon, and this Bill could provide the framework.
The central proposal of the Bill is the creation of an AI regulator, the AI Authority, whose responsibilities would include:
- ensuring regulators take account of AI and align their approaches to it
- reviewing relevant AI legislation
- monitoring and evaluating the regulatory framework
- implementing the principles introduced by the Bill, namely:
- safety
- security and robustness
- transparency and explainability
- fairness
- accountability and governance
- contestability and redress
The Bill also allows the Secretary of State to introduce regulations to require businesses that use AI to have a designated AI officer, and for records to be created of all third-party data and intellectual property used in the training of AI.
The Bill signals a sea change from the current system, whereby AI is indirectly regulated through existing legal frameworks in the UK such as financial services, product safety and consumer rights laws. However, there are gaps in the existing laws as far as AI is concerned, and it is in these gaps that the risks arise.
The EU position
The EU is further advanced in its journey to regulate AI, with the Artificial Intelligence Act (the Act) due to become law in the EU in early 2024.
The Act will regulate AI systems in the EU using a proportionate, risk-based approach that categorises AI by risk level. The highest-risk systems, those posing significant risks to the health, safety or fundamental rights of persons, will have to complete risk assessment procedures before they can be used. There will also be transparency obligations on generative AI, requiring disclosure that content is AI-generated. Enforcement of the Act will rely on Member States designating a competent authority, and a European Artificial Intelligence Board will be established to increase efficiency and act as an official point of contact with the public and other counterparts.
Final thoughts
Despite a few similarities between the Bill and the Act (such as the focus on transparency), it is clear that the EU’s approach to regulating AI is much more advanced. The Bill does not set out any kind of risk-based approach to categorising AI, nor does it introduce any procedures for assessing an AI system’s risks. This may, however, reflect the UK’s pro-innovation stance on AI, as signalled in its Policy Paper (A pro-innovation approach to AI regulation, updated 3 August 2023), and its reluctance to impose too much regulation, which could stifle innovation and competition by causing a disproportionate number of smaller businesses to leave the market.
The Bill has only had its first reading, and as it is a high-level framework, it is difficult to predict whether Parliament will take it further. It is, however, a clear signal that the issue is on the agenda, and the UK is likely to feel pressure to take action.