Is Healthcare AI Safe?
- Date
- 25 Mar 2025
- Start time
- 7:00 PM
- Venue
- Tempest Anderson Hall
- Speaker
- Dr Mark Nicholson, University of York
Is Healthcare AI Safe?
Dr Mark Nicholson, Reader in System Safety, Department of Computing, University of York
System Safety aims to prevent accidents and reduce risks associated with technologies employed in systems such as cars, aircraft and healthcare medical devices. Information systems, such as electronic health records, provide the information clinicians need to make good patient care decisions.
System safety has had great success at ensuring that software used in such systems benefits society and does not cause harm. For new systems, safety is built in; for in-service systems, effective safety risk management is undertaken. Disruption, however, comes in the form of Artificial Intelligence (AI). AI offers vast potential to improve system outcomes and enhance efficiency. In healthcare, AI helps with diagnostics and early detection of diseases, offers new ways to create personalised medicine and treatment plans, supports the potential for robotic surgery, increases healthcare administration efficiency and supports access to mental health services. However, for AI to truly revolutionise healthcare, challenges like safety and ethics need to be addressed. In this talk we investigate AI safety for healthcare and the role of system safety in ensuring patient safety and providing a safety case. We show how the University of York is at the forefront of AI safety research.
Mark Nicholson is a Reader in System Safety in the Computer Science Department at the University of York. He has 30 years’ experience researching software safety and educating industrial practitioners in system safety. He has provided numerous courses in digital and AI safety to clinicians. He has authored a number of industrial standards including the current safety certification standard for large civil aircraft. He is a Director of the UK Safety-critical Systems Club.
For the last five years he has been part of the core technical team in a £12m programme sponsored by Lloyd's Register at the University of York. This programme has now become the Centre for Assuring Autonomy with a further £7m sponsorship. This centre includes a Centre for Doctoral Training in AI safety that will eventually include more than 60 doctoral students.
7pm in the Tempest Anderson Lecture Theatre in the Yorkshire Museum
Member’s report
The underlying technology of Artificial Intelligence (AI) is based on data models that transform input data into more useful output data, and AI creates these data models from examples. Breast screening is one such example: a model takes an image and decides whether it indicates cancer or non-cancer.
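As a purely illustrative sketch of that idea, the snippet below trains a simple classifier on synthetic, made-up feature vectors standing in for image-derived features; it is an assumption for illustration only, not any clinical model described in the talk.

```python
# Minimal sketch: a data model learned from examples that maps input features
# (synthetic stand-ins for features extracted from a breast screening image)
# to an output decision (cancer / non-cancer). Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: 200 cases, 5 image-derived features each,
# label 1 = cancer, 0 = non-cancer.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)      # "AI creates the data model"

new_case = rng.normal(size=(1, 5))          # features from a new image
probability_of_cancer = model.predict_proba(new_case)[0, 1]
print(f"Probability of cancer: {probability_of_cancer:.2f}")
```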
AI can be used for healthcare administration and efficiency, for wellness information services, and for the information systems that support patient care, including electronic medical records.
We may want to use it for medical robots, for which there are six stages of autonomy:
0 no autonomy
1 robot assistance
2 task autonomy
3 conditional autonomy
4 high autonomy
5 full autonomy
Work is progressing on these robots and decisions need to be made on how far up the scale we want to go.
There are chatbots in healthcare both for symptom checking and mental health support. Studies have examined whether data models are effective in healthcare. The Massachusetts Institute of Technology found that many hundreds of predictive tools had been developed; none had been effective and some were potentially harmful. The British Medical Journal found that of 232 algorithms for diagnosing patients or predicting how sick they might become, none was fit for clinical use. AI could help with this, but significant issues need to be addressed.
AI guardrail 1 covers the legality of the technology: in the UK, AI-based medical devices are regulated by the MHRA under the Medical Devices Regulations 2002. AI guardrail 2 covers ethics approval. The WHO principles for the use of AI (2023) have these key points:
o Protect human autonomy
o Promote human well-being, safety and public interest
o Ensure transparency, explainability and intelligibility
o Foster responsibility and accountability
o Ensure inclusiveness and equity
o Promote AI that is responsive and sustainable
Past experience of the safety of technology in the 1980s shows the risk: patients treated with the Therac-25 radiotherapy machine were given massive overdoses of radiation because of software failures.
In the 2020s, IT failures have caused patient deaths: some patients diagnosed with lung cancer were not followed up, or were given the wrong medication, because of mix-ups with electronic notes. Computer failings were found in nearly every investigation. Building safety into systems engineering can prevent accidents caused by engineered artefacts: safety is designed in from the start, effective safety risk management is employed on in-service systems, and a safety case is built to assure that a good job has been done.
During COVID-19, a personal risk 'bowtie' approach was considered: on one side, threats are identified and their likelihood managed through preventative measures leading up to an infection event; on the other side, response and mitigation measures lead through to consequence management.
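As a minimal sketch of that structure, a bowtie can be represented as preventative barriers on the left of a top event and mitigation barriers on the right. The threat, barrier and consequence names below are illustrative assumptions, not the actual COVID-19 model discussed in the talk.

```python
# Minimal sketch of a personal-risk "bowtie" as a plain data structure.
from dataclasses import dataclass, field

@dataclass
class Bowtie:
    top_event: str
    threats: dict[str, list[str]] = field(default_factory=dict)        # threat -> preventative barriers
    consequences: dict[str, list[str]] = field(default_factory=dict)   # consequence -> mitigation barriers

covid_bowtie = Bowtie(
    top_event="Infection event",
    threats={
        "Close contact with an infected person": ["Distancing", "Masks", "Vaccination"],
    },
    consequences={
        "Severe illness": ["Early testing", "Antiviral treatment", "Hospital care"],
    },
)

# Left-hand side manages likelihood; right-hand side manages consequences.
for threat, barriers in covid_bowtie.threats.items():
    print(f"Threat '{threat}' -> prevention: {', '.join(barriers)}")
for consequence, barriers in covid_bowtie.consequences.items():
    print(f"Consequence '{consequence}' -> mitigation: {', '.join(barriers)}")
```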
AI safety in healthcare means the absence of unacceptable risk of harm caused by its use. The potential harms fall into four broad categories:
o Accidental physical harms arising from AI failing or acting in unanticipated ways
o Harms arising from misuse of AI based systems
o Structural harms arising from systems altering the dynamics of social, political and economic systems
o Upstream harms, such as those arising from inappropriate collection and use of data
AI guardrails can be used to reduce risk:
o Conventional engineered controls, such as an online clinical chatbot using a lookup table of trigger words that stops the dialogue and connects the user to a human for immediate support (a sketch of this pattern follows below).
o Design controls, so that failure of the AI leads toward safety and not away from it.
o Human-based controls, so that there is sufficient time for humans to make informed decisions and exercise control.
o AI development methods provide controls: the choice of learning technique is justified using analysis so that the resulting model can be assured.
Further research is needed on the above.
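The trigger-word control mentioned in the first item above can be sketched as follows; the trigger words and example messages are illustrative assumptions, not a real clinical system.

```python
# Minimal sketch of a conventional engineered control: each user message is
# checked against a lookup table of trigger words and, on a match, the AI
# dialogue is stopped and the user is handed over to a human.
TRIGGER_WORDS = {"suicide", "overdose", "chest pain", "self-harm"}

def guarded_reply(user_message: str, ai_reply: str) -> str:
    """Return the AI reply only if no trigger word is present."""
    text = user_message.lower()
    if any(trigger in text for trigger in TRIGGER_WORDS):
        # Engineered control: bypass the AI and escalate to a human.
        return ("I'm connecting you to a member of our clinical team "
                "for immediate support.")
    return ai_reply

print(guarded_reply("I have chest pain and feel dizzy", "Try resting for a while."))
print(guarded_reply("How do I book a flu jab?", "You can book online or by phone."))
```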
The University of York is involved with the AMLAS (Assurance of Machine Learning for use in Autonomous Systems) safety case development guidance. Part of this is data management assurance, which aims to ensure the following properties of the data used to build a model (a sketch of simple checks follows the list):
o Relevance – that it matches the intended healthcare situation
o Completeness – across healthcare situations that will be encountered by the model
o Accuracy – that the data is correct
o Balance – that it is appropriately distributed across each ‘class’
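As a rough illustration, the completeness and balance criteria can be expressed as simple, automatable checks over a labelled dataset. The dataset, classes and thresholds below are assumptions made for the example, not AMLAS requirements.

```python
# Minimal sketch of "balance" and "completeness" checks on training data.
from collections import Counter

def check_balance(labels, max_ratio=3.0):
    """Flag the dataset if the largest class outnumbers the smallest by more than max_ratio."""
    counts = Counter(labels)
    largest, smallest = max(counts.values()), min(counts.values())
    return largest / smallest <= max_ratio, counts

def check_completeness(situations_covered, situations_required):
    """Check that every required healthcare situation appears in the data."""
    missing = set(situations_required) - set(situations_covered)
    return len(missing) == 0, missing

labels = ["cancer"] * 40 + ["non-cancer"] * 160
ok_balance, counts = check_balance(labels)
print("Balance OK:", ok_balance, counts)

ok_complete, missing = check_completeness(
    situations_covered={"screening", "follow-up"},
    situations_required={"screening", "follow-up", "symptomatic referral"},
)
print("Completeness OK:", ok_complete, "missing:", missing)
```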
It is important that healthcare remains human-centric, especially with regard to responsibility and liability. If a patient safety incident occurs in a healthcare pathway employing AI, questions arise as to who is responsible, who pays any damages, whether a clinical practitioner can get insurance, and what it is realistic to ask of a human in this situation.
The University of York is addressing AI safety in healthcare through its Centre for Assuring Autonomy, which includes doctoral training in safe AI. The vision is to train future leaders with the research expertise and skills to ensure that the benefits of AI systems are realised without introducing harm as systems and their environments evolve.
The overall message was that healthcare AI will be safe enough, not by accident, but through responsible innovation:
o Improving understanding of what it means to be safe.
o Doing system safety well.
o Improving safety practices where necessary.
o Improving risk acceptance criteria.
o Improving dynamic, through-life risk acceptance.
Jon Coulson