A whistle-stop guide to managing disputes risk from the development of AI
In the last couple of months we’ve seen the directors of a taxi business sue the creator of an AI route mapping app for professional negligence, the Master of the Rolls predict that, in the future, it could be considered negligent not to use generative AI, LexisNexis announce plans to roll out its new Lexis+AI in the next quarter, and the Government publish its response to the consultation on AI Regulation (which, in essence, promotes innovation).
Starting with the Government’s response: in short, the UK will proceed with a principles-based approach to regulating AI. In practice, that places the onus on companies to put proper governance frameworks in place and to manage AI risk actively.
To date, most substantive legal claims have been brought internationally rather than in the UK. Many concern copyright infringement (for example, Silverman et al v OpenAI Inc et al and Tremblay et al v OpenAI), but other types of claim are likely to follow in the near future. Unsurprisingly, given the exponential rise in the use of generative AI by businesses, we have already seen breach of contract claims. These include (i) Tyndaris SAM v MMWWVWM Limited (concerning investment in an AI system by a third party, which it is understood settled before trial in 2020); and (ii) Leeway Services Ltd v Amazon Payments UK Limited, issued back in June 2021 (claiming that the claimant’s suspension from the marketplace was caused by AI, with the result that it did not make its expected online sales).
We now understand that an English High Court claim in professional negligence has been brought by the directors of a taxi business against the creators of an AI route mapping application (Tailor and another v Gittens). So, what could be next? Claims arising out of data protection breaches, negligence in the design of an application, product liability, database rights, trade secrets, breach of confidence, defamation, discrimination and harassment, to name a few.
Companies must therefore manage disputes risk on an ongoing basis and be aware of the potential legal pitfalls throughout the development of AI. But how can this be done? Is it even possible? The reality is that there’s no hard and fast rule, but the elements below will certainly put your company on the right path.
Avoiding potential legal pitfalls
Intellectual property minefield
Develop a robust strategy for intellectual property protection and clearly define ownership rights. Remember, it’s not just about who thought of it first; it’s about who can prove it in court (as we’ve recently seen in the Bitcoin inventor saga). Keep this in mind at the design stage where possible!
Data privacy tightrope
The UK General Data Protection Regulation (UK GDPR) sets the standard for how personal data must be handled. Transparent communication with users is key. Personal data must be guarded: take this into account at the design stage and adopt guidance notes or methodologies that embed ethical data management throughout, as sketched below.
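By way of illustration only (the field names, salt handling and record structure below are invented for this sketch, not a prescribed method), one privacy-by-design measure is to pseudonymise direct identifiers before records ever reach a training pipeline:

```python
import hashlib
import os

# Invented for this sketch: a per-deployment secret salt kept outside the dataset
SALT = os.environ.get("PSEUDONYMISATION_SALT", "change-me")

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash.

    The hash lets records be linked consistently without exposing
    the underlying identifier inside the training data itself.
    """
    digest = hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()
    return digest[:16]  # truncated for readability; keep the full digest in practice

record = {"name": "Jane Doe", "email": "jane@example.com", "journeys": 42}

# Privacy by design: strip or transform direct identifiers before the
# record enters a training set
training_record = {
    "user_id": pseudonymise(record["email"]),  # linkable, but not directly identifying
    "journeys": record["journeys"],            # non-identifying feature retained
}
print(training_record)
```

Bear in mind that pseudonymised data generally remains personal data under the UK GDPR: pseudonymisation reduces risk, but it does not take the processing outside the regulation.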
Explainability quandary
Be ready to demystify your AI magic by ensuring your algorithms are interpretable and can be explained to a lay person. A practical starting point is to set out clear, implementable internal governance procedures to manage the development, implementation and monitoring of AI systems; on the technical side, interpretable designs help too.
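To make the idea concrete (this is a minimal, invented sketch, not a recommended model), a simple linear scoring model lets every decision be explained in plain English as a list of “reason codes”, because the contribution of each feature can be read off directly:

```python
# Invented weights and features, for illustration only
WEIGHTS = {"journeys_last_month": 0.4, "avg_rating": 1.2, "complaints": -2.0}
PLAIN_ENGLISH = {
    "journeys_last_month": "how often the driver worked last month",
    "avg_rating": "the driver's average customer rating",
    "complaints": "the number of recent complaints",
}

def score_with_reasons(features):
    """Return a score plus a lay explanation of what drove it."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    # Lead the explanation with the features that mattered most
    reasons = [
        f"{PLAIN_ENGLISH[name]} {'raised' if c > 0 else 'lowered'} the score by {abs(c):.1f}"
        for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    ]
    return score, reasons

score, reasons = score_with_reasons(
    {"journeys_last_month": 20, "avg_rating": 4.5, "complaints": 1}
)
print(f"score = {score:.1f}")
for reason in reasons:
    print("-", reason)
```

This is the explainability trade-off in miniature: a simpler model may give up some predictive power, but in exchange its decisions can be explained to a user, a regulator or a court.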
Bias backlash
AI bias attracts so much attention because it can reduce the accuracy and effectiveness of AI systems and breed mistrust among marginalised groups. So, what can be done to manage that risk? Regularly audit your algorithms for bias, make sure your training data isn’t reinforcing stereotypes and, where possible, have a human review the output. A simple audit is sketched below.
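As a hedged illustration (the decision data, group labels and threshold are all invented), a basic fairness audit might compare outcome rates across a protected characteristic and flag large disparities for human review. The four-fifths ratio used here is a common rule of thumb, not a UK legal test:

```python
# Invented illustration: model decisions tagged with a protected characteristic
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(records):
    """Approval rate per group."""
    counts = {}  # group -> [approved, total]
    for r in records:
        approved, total = counts.setdefault(r["group"], [0, 0])
        counts[r["group"]] = [approved + int(r["approved"]), total + 1]
    return {g: a / t for g, (a, t) in counts.items()}

rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio = {ratio:.2f}")

# Four-fifths rule of thumb: flag if the worst-off group's rate is
# below 80% of the best-off group's rate
if ratio < 0.8:
    print("Flag for human review: outcomes differ markedly between groups")
```

A real audit would test multiple metrics and characteristics, but even a check this simple creates a record that bias was considered, which matters if a dispute later arises.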
Contractual conundrums
Contracts are your legal armour – wear them proudly. Clearly define terms, responsibilities, scope of work/specification and liabilities in your contracts from the outset. Consider the level and type of liability and who bears the risk at each stage of the process: creation, training of the AI, implementation, monitoring and maintenance. Think about the indemnities and warranties being provided. There is no place for ambiguity. A well-drafted contract is like a good insurance policy – you hope you never need it, but you’ll be glad you have it if you do.
Service providers should be aware that a term will be implied into their contracts that they will carry out the service with reasonable care and skill, unless the parties agree to exclude it (section 13, Supply of Goods and Services Act 1982). A court is likely to consider matters such as the reasonableness of using AI to provide the service and the checks undertaken throughout.
AI safety policies
The inaugural AI Safety Summit at Bletchley Park last year provided a great platform for discussing AI safety. Leading AI providers already have safety policies in place; so should everyone else.
Key takeaways
- Legal dream team: Assemble a team that understands the nuances of AI, data protection and technology law (to name a few areas); they will bridge the gap between the tech and legal worlds.
- Continuous compliance: Treat compliance like your morning coffee – non-negotiable. Regulations evolve, and so should your compliance efforts. Stay informed and adapt swiftly.
- Crisis communication plan: Develop a crisis communication plan; when disputes arise, communication is key.
- Ethics as a North Star: A genuine commitment to ethical AI from day one will bear fruit, fostering a reputation worth its weight in gold.
In the ever-evolving landscape of AI, disputes are not a matter of “if” but “when”. Companies must embrace a proactive approach to risk management, and our experienced team are here to help every step of the way!