Event Information
Is Healthcare AI Safe? An Update
Dr Mark Nicholson, Reader in System Safety, Department of Computing, University of York
System Safety aims to prevent accidents and reduce the risks associated with technologies employed in systems such as cars, aircraft and medical devices. Information systems such as electronic health records provide the information clinicians need to make good patient care decisions.
System safety has had great success in ensuring that the software used in such systems benefits society and does not cause harm. For new systems, safety is built in; for in-service systems, effective safety risk management is undertaken. Disruption, however, comes in the form of Artificial Intelligence (AI). AI offers vast potential to improve system outcomes and enhance efficiency. In healthcare, AI helps with diagnostics and the early detection of diseases, offers new ways to create personalized medicine and treatment plans, supports the potential for robotic surgery, increases the efficiency of healthcare administration and widens access to mental health services. However, for AI to truly revolutionize healthcare, challenges such as safety and ethics need to be addressed. In this talk we investigate AI safety for healthcare and the role of system safety in ensuring patient safety and providing a safety case. We show how the University of York is at the forefront of AI safety research.
This lecture will be held on Zoom at 7.30pm (GMT), and invitations will be sent to YPS members and the general mailing list two days before the event. This is a free event, but non-members can help to cover our lecture programme costs by donating here:
https://www.ypsyork.org/donate-to-yps/
Member’s report:
The initial part of the lecture revisited the topics covered in the lecture given on 25th March, so those are not repeated here.
Additional issues were covered regarding ‘safe’ AI, for example ‘gaslighting’ chatbots: a user asked about local cinemas showing Avatar: The Way of Water; the chatbot became confused about the current year, refused to accept the user’s assertion as to the correct year, and accused the user of being wrong, confused, rude and of trying to ‘annoy’ it. The user terminated the session, but had this happened in a medical situation with a vulnerable patient, the outcome would have been unacceptable.
Another issue identified was ‘killer’ robots: an example was given of a man crushed to death by a robot in South Korea. The robot failed to differentiate the victim from the boxes of food it was handling; its arm grabbed him and pushed him against a conveyor belt, crushing his face and chest and causing injuries that proved fatal. Again, such a failure would be catastrophic in a medical situation.
The University of York has a Centre for Assuring Autonomy, building on York’s existing reputation in system safety. Starting in September 2026 there will be an MSc SCSE, which will seek to understand how software-based complex products and services undertake tasks that pose a risk to human lives, and to become a leader in keeping people safe. Additionally there will be a Centre for Doctoral Training in safe Artificial Intelligence systems, with 65 doctoral students. There are two challenges for this:
1. Safety of AI-enabled mobile autonomous systems in open contexts
2. Safety of human-AI teaming
The take-home messages were:
Healthcare systems are changing: AI capability is replacing existing human-coded healthcare software, and the nature of the safety challenges is changing with it.
Response to AI: guided, responsible innovation, as listed below:
• Improve understanding of what it means to be safe
• Do system safety basics well
• Improve safety practices where necessary
• Improve risk acceptance criteria for innovations
• Improve dynamic, through-life risk acceptance
Jon Coulson
