CAMBRIDGE CATALYST ISSUE 04

AI SPECIAL

John Cassidy, CEO of Cambridge Cancer Genomics (CCG.ai), considers the role of artificial intelligence in healthcare

Artificial intelligence (AI) has the potential to transform healthcare as we know it. But will it make healthcare fairer and more accessible for all, or will AI lead to a widening of healthcare disparities between the rich and the poor?

AI, and in particular deep learning, excels at uncovering non-obvious connections in large data sets. For this reason, it is hardly surprising that healthcare professionals are excited about the possibilities of AI. In general, the aim is not to replace existing healthcare professionals, but to give them the tools they need to reduce mundane or repetitive tasks. With AI that can process thousands of images in minutes, a doctor’s time can be freed up to focus on providing experience-based judgements on difficult-to-diagnose ‘edge cases’. As our population ages and the burden on our healthcare systems increases, there is also a huge economic incentive to develop intelligent systems to improve healthcare efficiencies. For these reasons, it is not surprising that hardly a day goes by without a new announcement on how intelligent algorithms will soon be able to diagnose cancer, macular degeneration or heart disease better than many experts.

However, as with all new technologies, there are limitations that need to be addressed. Perhaps the most glaring in healthcare is that of ‘black-box algorithms’. Artificial neural networks (ANNs), a specific type of deep learning, excel at finding hidden patterns and connections in large and often apparently unconnected training data sets. While these could tell you that you are more likely to die of heart disease, for example, they are terrible at explaining the hidden logic behind those connections and at distinguishing correlation from causation. In some ways this is a familiar concept in biomedical science – we still don’t understand how paracetamol works at the molecular level, for example. Doctors may be keen to deploy whatever tool is most effective, regardless of our deeper scientific understanding. For healthcare, this raises an interesting question: do we need to understand the reasons for our diagnosis or treatment regimen? Or is it enough to know that they will work?

More interpretable methods in machine learning may offer some assurance to regulators by at least attempting to define the most important features behind a decision. Indeed, these methods often uncover hidden biases within data sets. In our work at CCG.ai, for example, we uncovered a signature correlated with poor response to chemotherapy in breast cancer patients. Unfortunately, one of the most important individual features associated with poor response was race. In this case, we had not uncovered one of the deeper socio-economic issues in our healthcare system; in fact, ‘unknown race’ was far more associated with relapse than any other label. We reasoned that if a hospital did not manage to record a patient’s race, then perhaps its clinical follow-up was less than robust. Indeed, most cases of ‘unknown race’ within our training set came from a single clinical centre in the USA.
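To make this concrete, here is a minimal sketch in Python of the kind of audit described above. It is not CCG.ai’s actual pipeline: the cohort is synthetic, the feature names are invented, and scikit-learn is assumed to be available. An ‘unknown race’ flag is deliberately wired into the simulated outcomes, and permutation importance is then used to ask which features the trained model actually relies on.

    # Toy audit of a trained model (synthetic data, hypothetical features).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000

    signature = rng.normal(size=n)          # genuine biological signal
    noise = rng.normal(size=n)              # irrelevant measurement
    unknown_race = rng.binomial(1, 0.2, n)  # 1 = race not recorded

    # Relapse depends on biology, but also on the record-keeping artefact:
    # in this simulation, patients whose race went unrecorded relapse more,
    # standing in for a centre with weak clinical follow-up.
    p_relapse = 1 / (1 + np.exp(-(0.8 * signature + 1.5 * unknown_race - 1.0)))
    relapse = rng.binomial(1, p_relapse)

    X = np.column_stack([signature, noise, unknown_race])
    X_tr, X_te, y_tr, y_te = train_test_split(X, relapse, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # Permutation importance: how much does shuffling each feature hurt
    # held-out accuracy? A record-keeping flag scoring alongside the biology
    # is the red flag that points back at data collection, not at biology.
    result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    for name, imp in zip(["signature", "noise", "unknown_race"], result.importances_mean):
        print(f"{name:>12}: {imp:.3f}")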

In cases like the above, identifying biases in our training sets through the use of interpretable machine learning models could have an unforeseen benefit: we could use them to correct human biases. The real danger comes when we use models with poor interpretability. If unchecked, AI could amplify and perpetuate biases that already exist in healthcare.

But what if we were to use powerful black-box models on data sets without any bias? Aside from the technical challenge of assembling such data, AI can be ‘brittle’: a model trained on data from our best clinical centres may perform well in similar settings, yet break when exposed to brand-new data. At CCG.ai, we are building software tools to enable AI-powered precision oncology for all patients, so it’s important to ensure that AI doesn’t perpetuate inequalities already present in healthcare. Therefore, we need to 1) uncover and reduce the biases already present in our healthcare systems, and 2) ensure a wide variety of medical data sets are freely available for researchers to train on.
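Brittleness can be sketched just as briefly (again with synthetic data and scikit-learn assumed, not a real clinical model): a classifier trained at one hypothetical centre is evaluated at a second centre whose assay calibration shifts the recorded biomarker, and its accuracy collapses towards chance.

    # Toy illustration of distribution shift between two clinical centres.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)

    def cohort(n, offset):
        """Hypothetical cohort: underlying biology drives the outcome, but
        the recorded biomarker is shifted by the centre's assay calibration."""
        biology = rng.normal(size=n)
        biomarker = biology + offset + rng.normal(scale=0.2, size=n)
        outcome = (biology > 0).astype(int)
        return biomarker.reshape(-1, 1), outcome

    # Train at a flagship centre (offset 0), then deploy at a centre whose
    # assay reads 1.5 units higher on the same underlying biology.
    X_a, y_a = cohort(2000, offset=0.0)
    X_b, y_b = cohort(2000, offset=1.5)

    model = LogisticRegression().fit(X_a, y_a)
    print("centre A accuracy:", accuracy_score(y_a, model.predict(X_a)))  # high, ~0.9+
    print("centre B accuracy:", accuracy_score(y_b, model.predict(X_b)))  # near chance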

"Identifying biases in our training sets through the use of interpretable machine learning models could have an unforeseen benefit: correcting human biases"
