AI implementation in healthcare
AI has the potential to revolutionise the healthcare sector: enhancing diagnostic accuracy, predicting disease progression, and automatically personalising treatment plans. Combined with effective filing and communications systems, it can also streamline administrative tasks such as appointment management, scheduling and billing, freeing healthcare professionals to focus more on patient care.
Although AI can make healthcare more efficient, generative AI needs to be trained, both prior to implementation and on an ongoing basis. In the medical sphere this will ultimately involve the use of ‘live’ patient information, and obtaining consent for such processing, where required, is notoriously difficult within the sector. Patients and their families show notable distrust of all types of data sharing and analysis, and public confidence that data will be used correctly in AI models is low. But if the sector is to benefit from technological advances, that distrust needs to be faced and addressed.
The Information Commissioner’s Office (ICO) is broadly supportive of AI, although it highlights the need to take care and to properly assess and plan implementation so that compliance with the UK GDPR and related legislation is achieved. In a recent statement, Stephen Almond, Executive Director of Regulatory Risk at the ICO, was clear that “any organisation using its users’ information to train generative AI models needs to be transparent about how people’s data is being used. Organisations should put effective safeguards in place before they start using personal data for model training, including providing a clear and simple route for users to object to the processing.” Although that discussion centred on Meta’s plans to use Facebook and Instagram user data, it has clear implications for healthcare, where data privacy and ethical AI usage are fundamental if AI use is to take off.
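To make the ICO's point concrete, the safeguard it describes — a clear and simple route to object, checked before any personal data reaches model training — can be sketched in code. This is a minimal illustration only, not a compliance mechanism; the `ConsentRecord` class and its method names are hypothetical, invented for this example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical record of one patient's consent for a named
    processing purpose (e.g. use in model training)."""
    patient_id: str
    purpose: str
    granted: bool = False
    objected: bool = False
    history: list = field(default_factory=list)  # audit trail of events

    def _log(self, event: str) -> None:
        self.history.append((datetime.now(timezone.utc).isoformat(), event))

    def grant(self) -> None:
        self.granted, self.objected = True, False
        self._log("consent_granted")

    def object(self) -> None:
        # The "clear and simple route to object": a single action that
        # immediately blocks further processing, no questions asked.
        self.objected = True
        self._log("objection_recorded")

    def may_process(self) -> bool:
        # Checked *before* data is used for training, per the ICO's
        # expectation that safeguards exist before processing starts.
        return self.granted and not self.objected
```

A pipeline would then gate on `may_process()` for each record before including that patient's data in a training set; the `history` list gives the audit trail a regulator would expect to see.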
So how can the healthcare sector move forward? Proper consultation with stakeholders, acting upon feedback, and ensuring that individuals have real choices about how and to what extent they consent to their data being used will help to reassure patients that their data is in safe hands. A willingness to listen to and work with patients and stakeholders from diverse backgrounds will help organisations to avoid poor decision-making. Consultations should be properly planned and resourced, with the outcomes recorded. Changes indicated by feedback should then be made, with data protection by design and by default being paramount.
The sector also needs to adopt ethical AI practices, so that AI models both act within ethical boundaries and are trained on data that is collected and used responsibly. To ensure that ethics are factored in, consider updating your change control processes so that ethical analysis is given sufficient time. Sector participants are no strangers to ethics committees in relation to treatment and research – so this change should be relatively straightforward to implement.
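The change-control update suggested above can be sketched as a simple gate: a proposed model change proceeds only once every required sign-off, including the ethics review, is in place. This is an illustrative sketch under assumed names — the `ready_to_deploy` function and the sign-off labels are invented for this example, not drawn from any particular governance framework.

```python
# Sign-offs a change must collect before deployment; "ethics_review"
# sits alongside the usual technical and safety gates (labels assumed).
REQUIRED_SIGNOFFS = {"clinical_safety", "data_protection", "ethics_review"}

def ready_to_deploy(change: dict) -> bool:
    """Return True only when every required gate has signed off,
    so an ethics review cannot be skipped under time pressure."""
    return REQUIRED_SIGNOFFS.issubset(change.get("signoffs", set()))
```

For example, a change carrying only `{"clinical_safety", "data_protection"}` would be held back until the ethics review is recorded, mirroring how research ethics committee approval already gates clinical studies.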