Visual messages have high impact; a picture is worth a thousand words. So it is unsurprising that the ability to image the structure and function of the human body underpins, and continues to revolutionise, medical practice.
Medical imaging has come a very long way in the century since the first grainy, much-celebrated X-ray image of Wilhelm Roentgen’s wife’s hand. Ian Donald’s first ultrasound showed a large cyst in the ovary; a later study showed a few echoes from a barely recognisable foetus. Today we visualise the antics of the unborn in real time and in 3D.
Scans using computed tomography (CT) used to take hours to obtain a single slice through the brain, without any differentiation between structures. Today, we can accomplish that feat with far greater resolution in less than a second.
The first ‘noisy’ MRI images also took hours to acquire. Nowadays, we expect images of not only the anatomy but also the blood flow, function and molecular features of the tissue in a single comprehensive ‘multiparametric’ MRI examination.
And all this born of British invention! Ian Donald was a Scot with his laboratory in Glasgow; Sir Godfrey Hounsfield, Nobel Laureate for his development of CT, worked at the EMI Centre in Hayes; and Sir Peter Mansfield, a physicist from Nottingham, received the Nobel Prize jointly with Paul Lauterbur for their development of clinical MRI.
How do we use medical imaging in cancer?
We use imaging during three key stages in the cancer pathway – to make the diagnosis, to deliver therapy and to assess response to treatment. There are various screening programmes to detect and help diagnose asymptomatic disease.
The NHS breast cancer screening service, established in 1987, now diagnoses around half of breast cancers in middle-aged women. Screening for lung cancer is being trialled with low-radiation-dose CT; although incidental nodules (false positives) remain a huge issue, it does save lives.
For the purposes of screening, imaging can also be used in conjunction with other tests – for instance, a faecal occult blood (FOB) test for bowel cancer screening (a testing kit is mailed out to the over-60s), followed by CT colonography if the FOB is positive.
However, CT colonography cannot ‘see and treat’ in the way that colonoscopy (direct visual inspection) can, so it is under-used.
In ovarian cancer screening, a large trial investigated ultrasound in conjunction with a blood test—the CA125—but the positive predictive value for cancer was too low for it to be cost-effective.
Assessments that inform treatment
Once a positive diagnosis of cancer has been made, imaging is used to describe the extent (“stage”) of the disease. Evaluation of tumour size and local, regional and distant spread must all be documented prior to embarking on treatment.
To understand how a tumour might behave, we currently amass a range of information from different types of imaging, and quantify and classify these data to extract features that inform on tumour function and predict growth patterns.
Imaging is vitally important for delivering targeted therapies. CyberKnife radiotherapy techniques use CT with markers for guidance. A new machine called the MR Linac delivers radiotherapy using the superior soft-tissue contrast of MRI to track the tumour.
We can burn away tumours very accurately using a non-invasive technique that focuses sound waves (high-intensity focused ultrasound, or HIFU) under MRI guidance. Or we can radiolabel drugs and assess their distribution and dose to tissues using imaging.
To assess response to treatment, multiple types of imaging at multiple time points are often the norm. Numerical information extracted from these extensive datasets helps indicate whether the treatment is working, and different types of images tell us different things about the tumour. It is an awful lot of imaging, and a minefield of data.
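To give a flavour of the simplest kind of numerical information involved, here is a minimal sketch (in Python) classifying response from the change in tumour size between scans. The function and its RECIST-style thresholds are illustrative assumptions; real response assessment draws on far more than a single measurement.

```python
# Minimal sketch: classifying tumour response from size measurements.
# The RECIST-style thresholds below are illustrative assumptions; real
# response criteria involve many more rules and imaging-derived measures.

def classify_response(baseline_mm: float, followup_mm: float) -> str:
    """Classify response from the sum of tumour diameters in millimetres."""
    if followup_mm == 0:
        return "complete response"
    change = (followup_mm - baseline_mm) / baseline_mm
    if change <= -0.30:   # at least 30% shrinkage
        return "partial response"
    if change >= 0.20:    # at least 20% growth
        return "progressive disease"
    return "stable disease"

print(classify_response(50.0, 30.0))  # -> partial response
print(classify_response(50.0, 65.0))  # -> progressive disease
```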
Data overload
The main challenge in imaging today is information overload. The sheer volume of imaging – not just the number of patients imaged annually in radiology departments everywhere, but the number of images (often several thousand) generated for each patient examination – leads to inattentional blindness.
A landmark study in the US, much publicised in the news, showed that when searching for lung nodules on CT scans, 20 of 24 radiologists did not spot a picture of a gorilla embedded in the images of the lungs, even though eye-tracking showed they looked directly at it! More worryingly, only 55 per cent of the nodules were spotted. It is essential to find ways of dealing with the explosion of imaging data being generated – and that is even before considering the serious manpower (and woman power) crisis in radiology.
Inattentional blindness: the gorilla in the chest.
Artificial intelligence is coming
Artificial intelligence (AI, aspects of which are also referred to as machine learning, deep learning or artificial neural networks) learns through trial and error, and is more robust and less variable than humans in its output (although this obviously depends on the quality of the data you put in!).
How often is our momentary guilty pleasure at clicking on a desirable article online rewarded by a relentless barrage of similar items every time we access the internet? The machine has ‘learned’ our preferences in a few clicks!
In an imaging department, this kind of ‘intelligence’ can be harnessed to automate the setting up of patient scans and the acquisition of data from them. Not rocket science, but with a huge impact not just on workflow, throughput and costs, but also on achieving more precise comparisons and hence better diagnostics. Less is left to chance when estimating tumour response, for instance, because all the data are perfectly aligned.
For diagnostic purposes, we can prescribe the imaging features that we wish an algorithm to use to discriminate cancer. This is supervised ‘machine learning’. Deep learning goes a step further – rather than relying on prescribed features, it adapts, learning the discriminating features itself and training against other outcome data (pathological information, patient outcomes). Its performance in discriminating ‘normal’ from ‘abnormal’, or ‘bad’ from ‘good’, is greatly superior to that of supervised machine learning.
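As a concrete illustration of the supervised approach, the sketch below (in Python, using scikit-learn) trains a simple classifier on prescribed imaging features. The feature names and data are invented for illustration; a deep-learning equivalent would instead learn its own features from the raw images.

```python
# Minimal sketch of supervised machine learning on prescribed imaging
# features. Feature names and values are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [tumour volume (cm^3), mean contrast enhancement, border irregularity]
X = np.array([
    [1.2, 0.3, 0.1],
    [0.8, 0.2, 0.2],
    [0.9, 0.1, 0.1],
    [4.5, 0.9, 0.8],
    [5.1, 0.7, 0.9],
    [3.8, 0.8, 0.7],
])
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = benign, 1 = malignant

clf = LogisticRegression().fit(X, y)
print(clf.predict([[4.0, 0.85, 0.75]]))  # -> [1], i.e. predicted malignant
```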
Automated image registration and recall systems are already finding their way into radiology departments at reasonable cost. Algorithms can cost as little as $1 per scan analysed, and can be fully integrated with hospital information systems and the picture archiving and communication systems (PACS) used in radiology departments.
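To show what automated image registration involves at its simplest, here is a sketch (in Python with NumPy) of phase correlation, one standard technique for estimating the shift between two scans of the same patient. The choice of algorithm is my assumption – the article does not specify which methods commercial systems use.

```python
# Minimal sketch of automatic image registration by phase correlation,
# one standard technique (an assumption; vendors' methods vary).
import numpy as np

def estimate_shift(reference: np.ndarray, moved: np.ndarray):
    """Estimate the (row, col) translation that maps `moved` onto `reference`."""
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(moved))
    cross_power /= np.abs(cross_power) + 1e-12      # keep phase information only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Shifts beyond half the image size wrap around to negative values
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))

# Toy example: a synthetic 64x64 "scan" shifted by (5, -3) pixels
rng = np.random.default_rng(0)
scan = rng.random((64, 64))
shifted = np.roll(scan, shift=(5, -3), axis=(0, 1))
print(estimate_shift(shifted, scan))  # -> (5, -3)
```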
Making a step change
Bigger gains will come from an approach to AI modelled on Darwinian evolution. An evolutionary algorithm approach to diagnosis employs thousands of algorithms at the outset to come up with the best-match answer.
The worst 50 per cent are dumped, each of the rest is ‘mutated’ in a single way (a change to one mathematical parameter), and the process is repeated. In this way, the poorer algorithms are always discarded, and the ones that emerge are those most fit for purpose.
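Here is a minimal sketch (in Python) of that select-and-mutate loop on a toy diagnostic task, where each candidate ‘algorithm’ is just a pair of parameters for a threshold rule. The toy task, fitness function, mutation size and the way the population is refilled are all illustrative assumptions.

```python
# Minimal sketch of an evolutionary algorithm: score a population of
# candidate "algorithms", dump the worst 50 per cent, mutate one
# parameter of each survivor, and repeat. The toy task and all
# numerical choices here are illustrative assumptions.
import random

random.seed(42)

# Toy data: a single imaging feature, lower in "benign" than "malignant" cases
benign = [random.gauss(1.0, 0.5) for _ in range(50)]
malignant = [random.gauss(3.0, 0.5) for _ in range(50)]

def fitness(params):
    """Accuracy of the rule: call malignant when weight * value > threshold."""
    weight, threshold = params
    correct = sum(weight * v <= threshold for v in benign)
    correct += sum(weight * v > threshold for v in malignant)
    return correct / (len(benign) + len(malignant))

# Start with thousands of random candidate "algorithms"
population = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(2000)]

for generation in range(20):
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 2]          # dump the worst 50%
    mutants = []
    for parent in survivors:
        child = list(parent)
        child[random.randrange(2)] += random.gauss(0, 0.3)  # mutate ONE parameter
        mutants.append(child)
    population = survivors + mutants                        # refill the population

best = max(population, key=fitness)
print(f"best rule: {best[0]:.2f} * value > {best[1]:.2f}, accuracy {fitness(best):.0%}")
```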
But even this may not be enough. After all, at the end of each decision there is not just a tumour but a whole patient. In complex cases, where various other factors need to be considered, algorithms will fail or give the wrong answer. What we need here is amplified human intelligence.
Swarm AI
Swarm AI is becoming a boom industry for predictions, and several impressive studies have shown its power to predict all sorts of things, from which horse will win the Epsom Derby to who will win the Oscars.
Swarm AI is not a crowd aggregate, i.e. an average of the votes of all the participants. It is a closed-loop feedback system, in which each participant’s decision is shaped by the decisions of its nearest neighbours, so that the group swarms together to home in on the target.
Even if the swarm AI participants are uninitiated in the area – horse racing, for example – the collective effect is that a correct prediction can be reached.
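The sketch below (in Python) is a deliberately crude illustration of that closed-loop idea: each participant repeatedly revises its vote towards the local majority among its nearest neighbours, so the group converges on a shared choice rather than simply averaging. The ring topology and update rule are my assumptions, not the actual Swarm AI algorithm.

```python
# Crude sketch of closed-loop consensus: each participant repeatedly
# adopts the majority vote of its local neighbourhood on a ring. The
# topology and update rule are illustrative assumptions only.
import random
from collections import Counter

random.seed(7)
options = ["A", "B", "C"]                       # e.g. three horses in a race
votes = [random.choice(options) for _ in range(25)]
print("initial:", Counter(votes))

for _round in range(50):
    new_votes = []
    for i in range(len(votes)):
        # Each participant sees itself and its two nearest neighbours
        local = [votes[(i + d) % len(votes)] for d in (-1, 0, 1)]
        new_votes.append(Counter(local).most_common(1)[0][0])
    if new_votes == votes:                      # the feedback loop has settled
        break
    votes = new_votes

print("settled:", Counter(votes))               # opinion has clustered
```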
Lessons from AlphaGo
Another lesson in achieving a step change comes from DeepMind’s recent coup with its algorithm AlphaGo. The ancient Chinese game of Go, which has vastly more possible board configurations than there are atoms in the observable universe, was successfully ‘learned’ by this algorithm, which was trained on millions of moves from professional human games.
It eventually defeated the human world champion from Korea, Lee Sedol. In a subsequent iteration, AlphaGo Zero, the algorithm received no human training at all, but learned by playing against itself, starting with random moves.
This matched-opponent strategy meant it got stronger and stronger, and, in DeepMind’s words, ‘removed the constraints of human knowledge… and can create new knowledge’.
This concept of self-learning, applied to discovering which combinations of imaging features are associated with genetically and clinically aggressive tumours, may in future give us a better handle on tumour behaviour.
Visualising when treatments are activated
Another avenue for exploration is the design of therapeutic agents that can be imaged upon activation. This would allow the treatment administered to be calibrated in real time against the imaging response.
Examples of such agents are already emerging. A photosensitiser molecule linked to a chemotherapeutic agent is inactive by virtue of the linkage (Hu et al., Biomaterials, 2017); when the link is broken by enzymes in the tumour, the photosensitiser fluoresces. The fluorescence indicates that the link is broken and that the chemotherapeutic drug is active. And with the photosensitiser on board, photodynamic therapy can also be delivered as a treatment!
So what will imagers do?
In the future, we will have AI algorithms to seek out an abnormality, automatically position over it and obtain relevant, detailed scans of the region of interest. They will also track this region and other suspicious areas automatically at the next visit.
So with computers being so clever, what role will there be for human imagers?
Well, we will be the creators and inventors of new imaging techniques. We have had PET, PET-CT, PET-MR and maybe HyperPET, which combines metabolic imaging from two different modalities (MR and PET). There are other exciting combinations, such as photoacoustics (‘the sound of cells’), and yet others waiting to be dreamt up. And it is not just about diagnostics: we need to combine diagnosis with therapeutic options and become the imaging clinicians of the future.
And there will be virtual reality to get to grips with in administering these treatments. It already exists in the surgical environment and is finding increasing use in training, intraoperative planning and procedures with real-time input from multiple operators who are geographically far apart. A whole new world of networking and discovery!
Professor Nandita deSouza is Professor of Translational Imaging at The Institute of Cancer Research, London, and an honorary consultant at The Royal Marsden NHS Foundation Trust.
AI and deep learning are having an impact on many areas of research at the ICR – including Big Data analysis (Dr Bissan Al-Lazikani) and computational pathology (Dr Yinyin Yuan).