Deep learning is a relatively new branch of machine learning. Its algorithms can autonomously recognize patterns in image data. For this purpose, experts feed them a large number of images of a particular organ, such as CT scans of the liver. From these example images, the software learns typical features that appear across them. After training, the algorithm can recognize and highlight the liver in previously unseen CT images. The more data the algorithm receives during training, the more accurate its results become.
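The principle of learning from labeled examples can be illustrated with a deliberately tiny sketch. This is not the MEVIS software; it trains a one-weight logistic classifier on synthetic pixel intensities (all values invented) to show how a model fitted on example data can then label previously unseen pixels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for annotated CT training data: "liver" pixels are
# assumed to be brighter than background pixels (all values invented).
n = 2000
liver = rng.normal(0.7, 0.1, n)                # intensities of liver pixels
background = rng.normal(0.3, 0.1, n)           # intensities of background pixels
x = np.concatenate([liver, background])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = liver, 0 = background

# Train a minimal logistic-regression "pixel classifier" by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # predicted liver probability
    grad_w = np.mean((p - y) * x)           # gradients of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 1.0 * grad_w
    b -= 1.0 * grad_b

# After training, the model labels previously unseen intensities.
test = np.array([0.72, 0.28])
pred = 1.0 / (1.0 + np.exp(-(w * test + b))) > 0.5
print(pred)  # bright pixel classified as liver, dark one as background
```

Real segmentation networks learn millions of weights over whole image patches rather than a single intensity threshold, but the training loop follows the same pattern: predict, compare with the expert annotation, adjust.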
The method is particularly helpful for a procedure called segmentation, which captures the exact outlines of organs in medical image data. Conventional segmentation software searches for predefined image features, such as differences in grey-scale values. An adaptive algorithm, by contrast, chooses for itself the features that lead to successful pattern recognition. “This algorithm delivers better results much faster,” says MEVIS researcher Hans Meine. “This is why deep learning as a supplementary tool is indispensable for us.”
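For contrast, here is what the conventional, predefined-feature approach looks like in its simplest form: a hand-picked grey-value threshold applied to a toy image (all values invented), followed by extraction of the organ outline. A deep-learning segmenter would replace the fixed threshold with features it discovers itself:

```python
import numpy as np

# A toy 2-D "scan": a bright square organ on a darker background.
image = np.full((8, 8), 0.2)
image[2:6, 2:6] = 0.8  # the "organ"

# Classic segmentation: a predefined grey-value feature, here a
# fixed intensity threshold chosen by hand.
THRESHOLD = 0.5
mask = image > THRESHOLD  # boolean organ mask

# The organ outline: mask pixels with at least one background neighbor.
interior = mask.copy()
interior[1:-1, 1:-1] = (mask[1:-1, 1:-1]
                        & mask[:-2, 1:-1] & mask[2:, 1:-1]
                        & mask[1:-1, :-2] & mask[1:-1, 2:])
outline = mask & ~interior

print(mask.sum(), outline.sum())  # 16 organ pixels, 12 of them on the outline
```

The weakness of this approach is visible in the single hard-coded constant: real organs rarely separate from their surroundings by one grey value, which is exactly where learned features pay off.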
At the Medical Imaging conference hosted by the International Society for Optics and Photonics (SPIE), Fraunhofer MEVIS researcher Jennifer Nitsch presented an algorithm that can segment ultrasound images of the brain. One possible application is a system to support neurosurgeons during operations. To operate as precisely as possible, surgeons rely on an MRI scan of the patient’s head taken before the procedure. The problem is that, because brain tissue is soft and unstable, the shape of the brain changes once the cranium is opened.
In the future, a trick could help adapt the MRI scan to this new situation. During the procedure, ultrasound images are taken, and based on them, a software program will warp the MRI scan to reflect the changed anatomy. This way, the surgeon constantly has an up-to-date ‘map’ of the patient’s brain. One prerequisite for reliably aligning the MRI and ultrasound images is that the software can segment both automatically. “Thanks to deep learning, we were able to significantly improve the segmentation of ultrasound images,” says Hans Meine. “The self-learning algorithms are highly beneficial for us.”
Another field of application is image registration, in which the computer aligns images taken at different times to allow optimal comparison. At the SPIE conference, MEVIS researcher Alessa Hering presented a self-learning algorithm that facilitates follow-up examinations of patients with lung tumors. Has the lump in the patient’s lung grown within a few weeks, or has it shrunk as hoped following therapy? To assess this, recent and older images must be aligned so that they show exactly the same structures. Fraunhofer experts have significantly accelerated this automatic image registration. “We were able to achieve a 40-fold speed-up of the already efficient registration with acceptable quality,” says Meine. “Previously, the process took eight seconds. With deep learning, it takes only 0.2 seconds.”
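The core idea of registration, finding the spatial transformation that brings two scans into agreement, can be sketched in a few lines. This is not the Hering/MEVIS method (which handles deformable lung motion); it is a minimal toy version on synthetic data that searches for the integer translation minimizing the squared difference between a baseline and a follow-up image:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy follow-up scenario: the second "scan" is the first one shifted,
# as if the patient lay slightly differently (all data synthetic).
baseline = rng.random((32, 32))
followup = np.roll(baseline, (3, -2), axis=(0, 1))

def register_translation(fixed, moving, max_shift=5):
    """Brute-force rigid registration: find the integer translation that
    minimizes the sum of squared differences between the two images."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            err = np.sum((shifted - fixed) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

shift = register_translation(baseline, followup)
print(shift)  # recovers (-3, 2), undoing the applied (3, -2) shift
```

The exhaustive search above also hints at why registration is expensive: the cost grows with the size of the transformation space. A trained network can instead predict the transformation in a single pass, which is where speed-ups of the kind Meine describes come from.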
Source: Fraunhofer MEVIS