Safety in the white space
Understanding how and why something works is critical to being able to evaluate whether it is safe to use. Computers and programs deliver predictable results within the confines of their programming languages, but if those rules are not properly understood the results can be unexpected. JavaScript, the most commonly used programming language, is infamous for producing results that may seem incorrect if you're not familiar with its rules of operator precedence and associativity. There's an example below for anyone who wants to take away some homework: can you explain why…
1 < 2 < 3 evaluates to TRUE
*but, perhaps surprisingly, so does*
3 < 2 < 1
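For anyone who'd like the answer spelled out, here's a minimal sketch of how JavaScript actually evaluates these expressions. There is no chained comparison in JavaScript: the `<` operator is left-associative, and booleans are coerced to numbers when compared.

```javascript
// JavaScript has no chained comparison; < is left-associative,
// so a < b < c parses as (a < b) < c.

// 1 < 2 < 3
// → (1 < 2) < 3    left-associative grouping
// → true < 3       boolean coerces to number: true → 1
// → 1 < 3
// → true

// 3 < 2 < 1
// → (3 < 2) < 1
// → false < 1      false → 0
// → 0 < 1
// → true           so this is ALSO true, counterintuitively

console.log(1 < 2 < 3); // true
console.log(3 < 2 < 1); // true

// The reverse chain is where you get a "wrong-looking" false:
console.log(3 > 2 > 1); // false: (3 > 2) > 1 → true > 1 → 1 > 1
```

The surprise isn't randomness: each step is perfectly predictable once you know the coercion and associativity rules, which is exactly the point about needing to understand *how* a system works before trusting it.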
I think this level of understanding is going to become increasingly important as we move to evaluate and deploy AI-based technologies in health – something we've been thinking a lot about this week. Although clear guidelines and frameworks exist in healthcare for Clinical Risk Management in IT Systems and for Software as a Medical Device, these standards don't yet cover everything. Whilst the UK is working out its approach to assessing AI as a Medical Device – this week saw the launch of the MHRA AI Airlock programme – we still need frameworks to assure these products as best we can in the here and now.
This week has seen us start to develop a local framework in GM to help teams wishing to deploy AI technologies to make safe decisions. This will help people navigate the complexities of Medical Device Regulation (with a focus on statements of intended use), DTAC and Clinical Risk Management, and will support them in appraising the populations used and the methodology applied during algorithm development. No one person can, or should, make all these assessments, but we need to find a way to leverage local expertise from institutions such as MMU and The Christabel Pankhurst Institute to help make the best decisions we can. Building these relationships and sharing knowledge will be the best way to help clinical and operational teams focus their expertise on improving care and clinical pathways without needing to become AI experts in the process!
This work is at an early stage and will need to evolve in response to any regulatory changes and as we gain experience of using the framework. If you're a team or supplier working in GM on AI-based, patient-facing technologies, I'd love to hear your views and feedback on how we should grow and iterate this – the aim is to help, not hinder, and any input would be hugely valued.