AI in Focus: Improving magnetic resonance imaging

[Image: MotionCorrection.jpg]

Magnetic resonance imaging has revolutionised medicine and medical research, but one of the biggest issues radiographers still grapple with is the artefacts created when patients shift around during the scan.

Now machine learning techniques are being used to teach magnetic resonance technology to recognise these artefacts and remove them, giving a much cleaner image for the clinician or researcher to work with.

A single magnetic resonance scan can take anything from a few seconds to 30 minutes, and can resolve details less than a millimetre in size. Unfortunately, this means that if the person being scanned moves by more than a millimetre, the resulting image is distorted by ripples that can blur vital structural details.
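To see why a millimetre of movement produces ripples rather than a simple smear, recall that an MRI scanner acquires its data line by line in k-space, the image's 2-D Fourier domain. If the subject shifts partway through the scan, the lines acquired after the movement pick up a phase ramp, and transforming back to image space spreads that inconsistency into ripple-like ghosting. The sketch below illustrates this with NumPy on a toy image; the shift size and corrupted fraction are illustrative values, not figures from the study.

```python
import numpy as np

def simulate_motion_artefact(image, shift_pixels=2.0, corrupted_fraction=0.3):
    """Corrupt an image with a simple rigid-motion artefact.

    A shift of the subject mid-scan applies a linear phase ramp to the
    k-space lines acquired after the movement; inverting the transform
    then produces the characteristic ripple ("ghosting") artefact.
    """
    kspace = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = kspace.shape
    # Phase ramp corresponding to a vertical shift of `shift_pixels`.
    ky = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
    phase_ramp = np.exp(-2j * np.pi * ky * shift_pixels)
    # Apply the ramp only to the last fraction of lines, as if the
    # patient moved partway through the acquisition.
    start = int(rows * (1 - corrupted_fraction))
    kspace[start:, :] *= phase_ramp[start:, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

# A toy "scan": a bright square on a dark background.
clean = np.zeros((64, 64))
clean[24:40, 24:40] = 1.0
corrupted = simulate_motion_artefact(clean)
```

Corrupting clean images in exactly this spirit is also how training pairs for an artefact-removal model can be manufactured without ever asking a volunteer to move on cue.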

The current approach to dealing with these artefacts uses motion capture technology— similar to that used for computer-generated imagery in the film industry—to record the patient’s movement and adjust the image accordingly. This requires use of a special camera that can function while an MRI scan is taking place, and is both painstaking and expensive.

A team of researchers from Monash Biomedical Imaging are applying what's called instrumentational deep learning—where the instrument itself has embedded deep learning algorithms to process the raw images—to the problem. To begin with, they acquired a series of normal magnetic resonance images using healthy volunteers, and used these to generate simulated motion artefacts. This training data set was then used to teach a machine learning model to recognise and remove these motion artefacts.
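The train-on-simulated-artefacts idea can be sketched in a few lines of PyTorch: pairs of (corrupted, clean) images teach a small convolutional network to predict and subtract the artefact. Everything here is a placeholder, including the ripple pattern standing in for real motion artefacts and the tiny residual architecture; the article does not describe the Monash team's actual model or data.

```python
import torch
import torch.nn as nn

class ArtefactRemover(nn.Module):
    """Toy residual network: predict the artefact, then subtract it."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)

# Synthetic stand-ins for the scans: random "clean" images plus a
# fixed sinusoidal ripple playing the role of a motion artefact.
torch.manual_seed(0)
clean = torch.rand(32, 1, 64, 64)
ripple = 0.2 * torch.sin(torch.linspace(0.0, 12.0, 64)).view(1, 1, 64, 1)
corrupted = clean + ripple

model = ArtefactRemover()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(corrupted), clean)
    loss.backward()
    optimiser.step()

with torch.no_grad():
    # The trained network should reconstruct the clean images better
    # than simply passing the corrupted input through.
    final_loss = loss_fn(model(corrupted), clean).item()
    baseline = loss_fn(corrupted, clean).item()
```

Even this toy loop hints at why the training is compute-hungry: real scans are far larger, the models far deeper, and the artefact simulations far more varied, which is what pushes the work onto high performance computing as described below.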

“The training of all these algorithms is very intensive,” says Dr Kamlesh Pawar, research fellow at Monash Biomedical Imaging. “It’s not possible to do this kind of training on a personal computer, as high performance computing is needed.”

Then, when the system encounters a scanned image distorted by movement, it can recognise the artefacts and remove them, producing an image that appears visually artefact-free.

“I’m just amazed at how good it does,” says Professor Gary Egan, director of Monash Biomedical Imaging. “With a typical system, you might get a 3%–5% error rate and with these deep learning techniques we’re down to 1%–1.5% residual error.”

So far the system has been trained using images from healthy volunteers. The next step is to train it using images from real patients with clinical disorders. This will teach the system not only to identify movement artefacts, but also to tell the difference between a movement-related artefact and a genuine disease-related pathology.

Wojtek James Goscinski